Assume that we are providing advice to a website provider looking for tools to automatically label images provided by end users. Looking across the factors in the study and making recommendations to management about image classification, we are most concerned with achieving the highest possible accuracy in image classification; that is, we should be willing to sacrifice training time for model accuracy. What type of machine learning model works best? If it is a convolutional neural network, what type of network should we use? Part of this recommendation may concern the initial images themselves (the input data for the classification task): what types of images work best?
Management recommendations:
I believe that we should not sacrifice training time for model accuracy. The Keras 2D convolutional model appears to work best for binary image classification, specifically the following architecture trained for 50 epochs:
model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(200, 200, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9), loss='binary_crossentropy', metrics=['accuracy'])  # categorical accuracy is uninformative for a single sigmoid output
The images that work best contain a single animal, with a clear shot of the animal's head.
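For completeness, turning the model's sigmoid output into a cat/dog label is a simple threshold at 0.5. A minimal sketch (the probabilities here are made-up examples, not model output):

```python
import numpy as np

# Hypothetical sigmoid outputs from model.predict (one probability per image)
probs = np.array([0.02, 0.97, 0.51, 0.49])

# 1 = dog, 0 = cat, matching the labeling convention used in this notebook
labels = (probs >= 0.5).astype(int)
print(labels.tolist())  # [0, 1, 1, 0]
```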
import pandas as pd
pd.read_csv('c:/users/rocchm1/documents/KNN_Neural_Tests.csv')
| | Test # | Number of Layers | Processing Time | Training Set Accuracy | Kaggle Score (Log Loss) | Epochs | Note |
|---|---|---|---|---|---|---|---|
| 0 | 1 | 18 | 223 seconds | 0.8985 | 3.71867 | 30 | Network Architecture 1. Good but can be improved |
| 1 | 2 | 19 | 446 seconds | 0.9877 | 7.65715 | 30 | Network Architecture 1. Overtrained |
| 2 | 3 | 16 | 1378 seconds | 0.9753 | 5.45001 | 90 | Network Architecture 2. Overtrained |
| 3 | 4 | 14 | 754 seconds | 0.9244 | 2.50312 | 35 | Network Architecture 2. Good but not as good a... |
| 4 | 5 | 14 | 1004 seconds | 0.9329 | 2.22274 | 35 | Network Architecture 2. Increased density of ... |
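The "Overtrained" notes in Tests 1 and 2 suggest stopping on validation loss rather than running a fixed epoch count. Keras provides this via `keras.callbacks.EarlyStopping`; the same idea, sketched framework-free so the stopping rule is explicit (the loss values below are invented for illustration):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the index of the epoch where training should stop: the first
    epoch after val loss has failed to improve for `patience` consecutive
    epochs, or the last epoch if it keeps improving."""
    best = float('inf')
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return len(val_losses) - 1

# Validation loss improves, then climbs -- stop 3 epochs past the minimum
print(early_stop_epoch([0.68, 0.60, 0.52, 0.50, 0.54, 0.58, 0.61], patience=3))  # 6
```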
from google.colab import files
files.upload()
Saving kaggle.json to kaggle.json
{'kaggle.json': b'{"username":"michaelrocchio","key":"<REDACTED>"}'}
!mkdir -p /root/.kaggle
!mv /content/kaggle.json /root/.kaggle/kaggle.json
!chmod 600 /root/.kaggle/kaggle.json  # silences the "readable by other users" warning below
ls /root/.kaggle/
kaggle.json
import kaggle
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
!kaggle competitions download -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
sample_submission.csv: Skipping, found more recently modified local copy (use --force to force download)
train.zip: Skipping, found more recently modified local copy (use --force to force download)
test.zip: Skipping, found more recently modified local copy (use --force to force download)
from zipfile import ZipFile
zip_test = ZipFile('/content/test.zip')
zip_test.extractall()
zip_train = ZipFile('/content/train.zip')
zip_train.extractall()
import os
import pandas as pd
import numpy as np
jpg_list=os.listdir('/content/train')
labels_list=pd.DataFrame({
'label': jpg_list,
'filename': jpg_list
})
labels_list['label']=labels_list['label'].str[0:3]
labels_list['y']=np.where(labels_list['label']=='dog',1,0)
labels_list
| label | filename | y | |
|---|---|---|---|
| 0 | cat | cat.9673.jpg | 0 |
| 1 | cat | cat.3257.jpg | 0 |
| 2 | dog | dog.11897.jpg | 1 |
| 3 | dog | dog.8137.jpg | 1 |
| 4 | dog | dog.2316.jpg | 1 |
| ... | ... | ... | ... |
| 24995 | dog | dog.12438.jpg | 1 |
| 24996 | dog | dog.10947.jpg | 1 |
| 24997 | dog | dog.6583.jpg | 1 |
| 24998 | cat | cat.4690.jpg | 0 |
| 24999 | cat | cat.1629.jpg | 0 |
25000 rows × 3 columns
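The label derivation above relies on the first three characters of each filename; that logic can be checked standalone with plain Python (the filenames below are examples in the dataset's naming scheme):

```python
filenames = ['cat.9673.jpg', 'dog.11897.jpg', 'cat.3257.jpg', 'dog.8137.jpg']

# Mirror labels_list: label = first three characters, y = 1 for dog, 0 for cat
labels = [name[0:3] for name in filenames]
y = [1 if label == 'dog' else 0 for label in labels]
print(labels)  # ['cat', 'dog', 'cat', 'dog']
print(y)       # [0, 1, 0, 1]
```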
import sys
from PIL import Image
import time
import glob
from IPython.display import clear_output
import cv2
from numpy import asarray
jpg_list_train=jpg_list
x = [] # images as arrays
y = [] # labels
end1=0
for image in jpg_list_train:
x.append(cv2.resize(cv2.imread('/content/train/{}'.format(image)), (150,150), interpolation=cv2.INTER_CUBIC))
clear_output(wait=True)
end1=end1+1
print('Images Processed:', end1)
print('Percent Complete:', round((100*(end1/len(jpg_list_train))),2))
if 'dog' in image:
y.append(1)
elif 'cat' in image:
y.append(0)
Images Processed: 25000
Percent Complete: 100.0
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img
from keras import layers, models, optimizers
from keras import backend as K
from numpy import asarray
x=asarray(x, dtype='float32')
y=asarray(y, dtype='float32')
# x=np.transpose(x, (2, 1, 3, 0))
# NOTE: x is not scaled to [0, 1] here, although the test images are divided by 255
# before prediction; this train/test mismatch likely hurts accuracy (the second run
# below adds x = x/255).
x.shape
(25000, 150, 150, 3)
y.shape
(25000,)
X_train = x[2100:,:,:,:]
X_val = x[:2100,:,:,:]
y_train = y[2100:]
y_val = y[:2100]
X_train.shape
del x
del y
X_train.shape
(22900, 150, 150, 3)
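The split above takes the first 2,100 images as validation, which assumes `os.listdir` happened to return the files in random order. A shuffled split with a fixed seed avoids that assumption; a sketch on toy arrays standing in for the image tensor:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for the (25000, 150, 150, 3) image tensor and label vector
x = np.arange(10).reshape(10, 1)
y = np.arange(10)

perm = rng.permutation(len(x))   # random ordering of indices
x, y = x[perm], y[perm]          # shuffle images and labels together
n_val = 2                        # e.g. 2100 in the notebook
X_val, y_val = x[:n_val], y[:n_val]
X_train, y_train = x[n_val:], y[n_val:]

print(X_train.shape, X_val.shape)  # (8, 1) (2, 1)
```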
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
import tensorflow
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
model = keras.models.Sequential()
# NOTE: the original positional call Conv2D(32, 3, 3) sets kernel_size=3 AND strides=3
# in tf.keras; the explicit keywords below preserve that behavior. For the more usual
# 3x3 kernel with stride 1, pass kernel_size=(3, 3) and omit strides.
model.add(keras.layers.Conv2D(32, kernel_size=3, strides=3, padding='same', activation='relu', input_shape=(150, 150, 3)))
model.add(keras.layers.Conv2D(32, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(keras.layers.Conv2D(64, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.Conv2D(64, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(keras.layers.Conv2D(128, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.Conv2D(128, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(keras.layers.Conv2D(256, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.Conv2D(256, kernel_size=3, strides=3, padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(256, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(256, activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
keras.backend.clear_session()  # NOTE: clearing the session after the model is built risks invalidating it; run this before constructing a new model instead
# model = Sequential([
# keras.layers.Convolution2D(32, 3, 3, input_shape=(150, 150, 3), activation='relu'),
# keras.layers.Convolution2D(32, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(64, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(64, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(256, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(256, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Flatten(),
# keras.layers.Dense(256, activation='relu'),
# keras.layers.Dropout(0.5),
# keras.layers.Dense(256, activation='relu'),
# keras.layers.Dropout(0.5),
# keras.layers.Dense(1),
# keras.layers.Activation('sigmoid')
# ])
import tensorflow as tf
class MulticlassTruePositives(tf.keras.metrics.Metric):
def __init__(self, name='multiclass_true_positives', **kwargs):
super(MulticlassTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name='tp', initializer='zeros')
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')
values = tf.cast(values, 'float32')
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, 'float32')
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.)
from keras import backend as K  # replaces the unnecessary `from tensorflow import *`

def recall(y_true, y_pred):
    # y_true must not be overwritten; the original `y_true = K.ones_like(y_true)`
    # line made recall meaningless
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    all_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (all_positives + K.epsilon())

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())

def f1_score(y_true, y_pred):
    # the original called undefined precision_m/recall_m helpers
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))
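The Keras-backend metrics can be sanity-checked against plain NumPy equivalents on rounded predictions. A small sketch (labels and predictions below are made up for the check):

```python
import numpy as np

def precision_np(y_true, y_pred):
    # Round predictions to hard 0/1 calls, then precision = TP / predicted positives
    y_hat = np.round(np.clip(y_pred, 0, 1))
    tp = np.sum(y_true * y_hat)
    return tp / (np.sum(y_hat) + 1e-7)

def recall_np(y_true, y_pred):
    # Recall = TP / actual positives
    y_hat = np.round(np.clip(y_pred, 0, 1))
    tp = np.sum(y_true * y_hat)
    return tp / (np.sum(y_true) + 1e-7)

y_true = np.array([1, 1, 0, 0])
y_pred = np.array([0.9, 0.2, 0.8, 0.1])  # one TP, one FN, one FP, one TN

print(round(precision_np(y_true, y_pred), 2))  # 0.5
print(round(recall_np(y_true, y_pred), 2))     # 0.5
```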
# categorical_accuracy and top-k accuracy are uninformative for a single sigmoid output
# (they are constant at 1.0 in the logs below); accuracy, precision, and recall are the
# meaningful metrics here.
model.compile(loss="binary_crossentropy", optimizer=RMSprop(learning_rate=1e-4), metrics=['accuracy', tensorflow.keras.metrics.categorical_accuracy, tensorflow.keras.metrics.Precision(), tensorflow.keras.metrics.Recall(), tensorflow.keras.metrics.TopKCategoricalAccuracy(k=2)])
import datetime
start=datetime.datetime.now()
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val))
print(datetime.datetime.now() - start)
Epoch 1/30
716/716 [==============================] - 10s 11ms/step - loss: 0.6902 - accuracy: 0.5266 - categorical_accuracy: 1.0000 - precision: 0.5251 - recall: 0.5515 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6841 - val_accuracy: 0.5571 - val_categorical_accuracy: 1.0000 - val_precision: 0.6450 - val_recall: 0.2635 - val_top_k_categorical_accuracy: 1.0000
Epoch 2/30
716/716 [==============================] - 8s 10ms/step - loss: 0.6631 - accuracy: 0.6044 - categorical_accuracy: 1.0000 - precision: 0.5961 - recall: 0.6463 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6310 - val_accuracy: 0.6595 - val_categorical_accuracy: 1.0000 - val_precision: 0.6749 - val_recall: 0.6218 - val_top_k_categorical_accuracy: 1.0000
Epoch 3/30
716/716 [==============================] - 7s 10ms/step - loss: 0.6276 - accuracy: 0.6493 - categorical_accuracy: 1.0000 - precision: 0.6471 - recall: 0.6562 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6065 - val_accuracy: 0.6748 - val_categorical_accuracy: 1.0000 - val_precision: 0.7168 - val_recall: 0.5829 - val_top_k_categorical_accuracy: 1.0000
Epoch 4/30
716/716 [==============================] - 7s 10ms/step - loss: 0.6003 - accuracy: 0.6762 - categorical_accuracy: 1.0000 - precision: 0.6807 - recall: 0.6633 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5920 - val_accuracy: 0.6962 - val_categorical_accuracy: 1.0000 - val_precision: 0.6590 - val_recall: 0.8190 - val_top_k_categorical_accuracy: 1.0000
Epoch 5/30
716/716 [==============================] - 8s 10ms/step - loss: 0.5769 - accuracy: 0.6983 - categorical_accuracy: 1.0000 - precision: 0.7105 - recall: 0.6688 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5783 - val_accuracy: 0.6990 - val_categorical_accuracy: 1.0000 - val_precision: 0.7917 - val_recall: 0.5441 - val_top_k_categorical_accuracy: 1.0000
Epoch 6/30
716/716 [==============================] - 7s 10ms/step - loss: 0.5541 - accuracy: 0.7210 - categorical_accuracy: 1.0000 - precision: 0.7322 - recall: 0.6965 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5479 - val_accuracy: 0.7205 - val_categorical_accuracy: 1.0000 - val_precision: 0.6890 - val_recall: 0.8085 - val_top_k_categorical_accuracy: 1.0000
Epoch 7/30
716/716 [==============================] - 7s 10ms/step - loss: 0.5329 - accuracy: 0.7380 - categorical_accuracy: 1.0000 - precision: 0.7512 - recall: 0.7115 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5370 - val_accuracy: 0.7310 - val_categorical_accuracy: 1.0000 - val_precision: 0.6995 - val_recall: 0.8142 - val_top_k_categorical_accuracy: 1.0000
Epoch 8/30
716/716 [==============================] - 8s 10ms/step - loss: 0.5148 - accuracy: 0.7494 - categorical_accuracy: 1.0000 - precision: 0.7663 - recall: 0.7173 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5209 - val_accuracy: 0.7433 - val_categorical_accuracy: 1.0000 - val_precision: 0.7486 - val_recall: 0.7365 - val_top_k_categorical_accuracy: 1.0000
Epoch 9/30
716/716 [==============================] - 7s 10ms/step - loss: 0.4995 - accuracy: 0.7603 - categorical_accuracy: 1.0000 - precision: 0.7787 - recall: 0.7269 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5182 - val_accuracy: 0.7333 - val_categorical_accuracy: 1.0000 - val_precision: 0.7114 - val_recall: 0.7896 - val_top_k_categorical_accuracy: 1.0000
Epoch 10/30
716/716 [==============================] - 7s 10ms/step - loss: 0.4837 - accuracy: 0.7670 - categorical_accuracy: 1.0000 - precision: 0.7804 - recall: 0.7429 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5160 - val_accuracy: 0.7414 - val_categorical_accuracy: 1.0000 - val_precision: 0.7949 - val_recall: 0.6540 - val_top_k_categorical_accuracy: 1.0000
Epoch 11/30
716/716 [==============================] - 8s 11ms/step - loss: 0.4711 - accuracy: 0.7786 - categorical_accuracy: 1.0000 - precision: 0.7878 - recall: 0.7623 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4969 - val_accuracy: 0.7500 - val_categorical_accuracy: 1.0000 - val_precision: 0.7538 - val_recall: 0.7460 - val_top_k_categorical_accuracy: 1.0000
Epoch 12/30
716/716 [==============================] - 8s 11ms/step - loss: 0.4550 - accuracy: 0.7871 - categorical_accuracy: 1.0000 - precision: 0.7970 - recall: 0.7701 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5430 - val_accuracy: 0.7305 - val_categorical_accuracy: 1.0000 - val_precision: 0.8336 - val_recall: 0.5791 - val_top_k_categorical_accuracy: 1.0000
Epoch 13/30
716/716 [==============================] - 8s 11ms/step - loss: 0.4397 - accuracy: 0.7965 - categorical_accuracy: 1.0000 - precision: 0.8027 - recall: 0.7859 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4972 - val_accuracy: 0.7605 - val_categorical_accuracy: 1.0000 - val_precision: 0.7469 - val_recall: 0.7915 - val_top_k_categorical_accuracy: 1.0000
Epoch 14/30
716/716 [==============================] - 8s 10ms/step - loss: 0.4301 - accuracy: 0.8026 - categorical_accuracy: 1.0000 - precision: 0.8093 - recall: 0.7915 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5150 - val_accuracy: 0.7557 - val_categorical_accuracy: 1.0000 - val_precision: 0.7528 - val_recall: 0.7649 - val_top_k_categorical_accuracy: 1.0000
Epoch 15/30
716/716 [==============================] - 7s 10ms/step - loss: 0.4168 - accuracy: 0.8095 - categorical_accuracy: 1.0000 - precision: 0.8228 - recall: 0.7886 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5519 - val_accuracy: 0.7519 - val_categorical_accuracy: 1.0000 - val_precision: 0.7106 - val_recall: 0.8540 - val_top_k_categorical_accuracy: 1.0000
Epoch 16/30
716/716 [==============================] - 7s 10ms/step - loss: 0.4029 - accuracy: 0.8186 - categorical_accuracy: 1.0000 - precision: 0.8234 - recall: 0.8111 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5387 - val_accuracy: 0.7605 - val_categorical_accuracy: 1.0000 - val_precision: 0.7851 - val_recall: 0.7204 - val_top_k_categorical_accuracy: 1.0000
Epoch 17/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3920 - accuracy: 0.8238 - categorical_accuracy: 1.0000 - precision: 0.8332 - recall: 0.8095 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5235 - val_accuracy: 0.7519 - val_categorical_accuracy: 1.0000 - val_precision: 0.7282 - val_recall: 0.8076 - val_top_k_categorical_accuracy: 1.0000
Epoch 18/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3801 - accuracy: 0.8316 - categorical_accuracy: 1.0000 - precision: 0.8406 - recall: 0.8182 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5390 - val_accuracy: 0.7633 - val_categorical_accuracy: 1.0000 - val_precision: 0.8185 - val_recall: 0.6796 - val_top_k_categorical_accuracy: 1.0000
Epoch 19/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3687 - accuracy: 0.8375 - categorical_accuracy: 1.0000 - precision: 0.8511 - recall: 0.8179 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5832 - val_accuracy: 0.7514 - val_categorical_accuracy: 1.0000 - val_precision: 0.8294 - val_recall: 0.6360 - val_top_k_categorical_accuracy: 1.0000
Epoch 20/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3521 - accuracy: 0.8460 - categorical_accuracy: 1.0000 - precision: 0.8550 - recall: 0.8331 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5907 - val_accuracy: 0.7576 - val_categorical_accuracy: 1.0000 - val_precision: 0.7809 - val_recall: 0.7194 - val_top_k_categorical_accuracy: 1.0000
Epoch 21/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3421 - accuracy: 0.8499 - categorical_accuracy: 1.0000 - precision: 0.8533 - recall: 0.8449 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5426 - val_accuracy: 0.7471 - val_categorical_accuracy: 1.0000 - val_precision: 0.7176 - val_recall: 0.8190 - val_top_k_categorical_accuracy: 1.0000
Epoch 22/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3304 - accuracy: 0.8549 - categorical_accuracy: 1.0000 - precision: 0.8616 - recall: 0.8455 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5998 - val_accuracy: 0.7500 - val_categorical_accuracy: 1.0000 - val_precision: 0.8039 - val_recall: 0.6645 - val_top_k_categorical_accuracy: 1.0000
Epoch 23/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3178 - accuracy: 0.8624 - categorical_accuracy: 1.0000 - precision: 0.8679 - recall: 0.8547 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6134 - val_accuracy: 0.7490 - val_categorical_accuracy: 1.0000 - val_precision: 0.8367 - val_recall: 0.6218 - val_top_k_categorical_accuracy: 1.0000
Epoch 24/30
716/716 [==============================] - 7s 10ms/step - loss: 0.3080 - accuracy: 0.8678 - categorical_accuracy: 1.0000 - precision: 0.8804 - recall: 0.8512 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6386 - val_accuracy: 0.7662 - val_categorical_accuracy: 1.0000 - val_precision: 0.7759 - val_recall: 0.7517 - val_top_k_categorical_accuracy: 1.0000
Epoch 25/30
716/716 [==============================] - 7s 10ms/step - loss: 0.2946 - accuracy: 0.8748 - categorical_accuracy: 1.0000 - precision: 0.8881 - recall: 0.8574 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.7480 - val_accuracy: 0.7510 - val_categorical_accuracy: 1.0000 - val_precision: 0.8016 - val_recall: 0.6701 - val_top_k_categorical_accuracy: 1.0000
Epoch 26/30
716/716 [==============================] - 7s 10ms/step - loss: 0.2833 - accuracy: 0.8794 - categorical_accuracy: 1.0000 - precision: 0.8916 - recall: 0.8637 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6460 - val_accuracy: 0.7586 - val_categorical_accuracy: 1.0000 - val_precision: 0.7580 - val_recall: 0.7630 - val_top_k_categorical_accuracy: 1.0000
Epoch 27/30
716/716 [==============================] - 7s 10ms/step - loss: 0.2755 - accuracy: 0.8856 - categorical_accuracy: 1.0000 - precision: 0.8937 - recall: 0.8751 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.8201 - val_accuracy: 0.7362 - val_categorical_accuracy: 1.0000 - val_precision: 0.8275 - val_recall: 0.6000 - val_top_k_categorical_accuracy: 1.0000
Epoch 28/30
716/716 [==============================] - 7s 10ms/step - loss: 0.2604 - accuracy: 0.8897 - categorical_accuracy: 1.0000 - precision: 0.8986 - recall: 0.8785 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.8172 - val_accuracy: 0.7433 - val_categorical_accuracy: 1.0000 - val_precision: 0.7209 - val_recall: 0.7981 - val_top_k_categorical_accuracy: 1.0000
Epoch 29/30
716/716 [==============================] - 7s 10ms/step - loss: 0.2567 - accuracy: 0.8940 - categorical_accuracy: 1.0000 - precision: 0.9046 - recall: 0.8807 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.7212 - val_accuracy: 0.7576 - val_categorical_accuracy: 1.0000 - val_precision: 0.7358 - val_recall: 0.8076 - val_top_k_categorical_accuracy: 1.0000
Epoch 30/30
716/716 [==============================] - 7s 10ms/step - loss: 0.2462 - accuracy: 0.8985 - categorical_accuracy: 1.0000 - precision: 0.9095 - recall: 0.8850 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.9961 - val_accuracy: 0.7557 - val_categorical_accuracy: 1.0000 - val_precision: 0.7340 - val_recall: 0.8057 - val_top_k_categorical_accuracy: 1.0000
datetime.timedelta(seconds=223, microseconds=30330)
pd.DataFrame(history.history)
| loss | accuracy | categorical_accuracy | precision | recall | top_k_categorical_accuracy | val_loss | val_accuracy | val_categorical_accuracy | val_precision | val_recall | val_top_k_categorical_accuracy | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.690229 | 0.526550 | 1.0 | 0.525081 | 0.551507 | 1.0 | 0.684075 | 0.557143 | 1.0 | 0.645012 | 0.263507 | 1.0 |
| 1 | 0.663075 | 0.604367 | 1.0 | 0.596100 | 0.646308 | 1.0 | 0.630951 | 0.659524 | 1.0 | 0.674897 | 0.621801 | 1.0 |
| 2 | 0.627579 | 0.649301 | 1.0 | 0.647079 | 0.656182 | 1.0 | 0.606488 | 0.674762 | 1.0 | 0.716783 | 0.582938 | 1.0 |
| 3 | 0.600273 | 0.676245 | 1.0 | 0.680714 | 0.663346 | 1.0 | 0.591953 | 0.696190 | 1.0 | 0.659039 | 0.818957 | 1.0 |
| 4 | 0.576855 | 0.698253 | 1.0 | 0.710480 | 0.668764 | 1.0 | 0.578310 | 0.699048 | 1.0 | 0.791724 | 0.544076 | 1.0 |
| 5 | 0.554140 | 0.720961 | 1.0 | 0.732158 | 0.696461 | 1.0 | 0.547934 | 0.720476 | 1.0 | 0.689015 | 0.808531 | 1.0 |
| 6 | 0.532949 | 0.738035 | 1.0 | 0.751199 | 0.711490 | 1.0 | 0.536962 | 0.730952 | 1.0 | 0.699511 | 0.814218 | 1.0 |
| 7 | 0.514789 | 0.749389 | 1.0 | 0.766287 | 0.717344 | 1.0 | 0.520901 | 0.743333 | 1.0 | 0.748555 | 0.736493 | 1.0 |
| 8 | 0.499545 | 0.760262 | 1.0 | 0.778714 | 0.726868 | 1.0 | 0.518175 | 0.733333 | 1.0 | 0.711358 | 0.789573 | 1.0 |
| 9 | 0.483658 | 0.767031 | 1.0 | 0.780378 | 0.742945 | 1.0 | 0.516016 | 0.741429 | 1.0 | 0.794931 | 0.654028 | 1.0 |
| 10 | 0.471060 | 0.778603 | 1.0 | 0.787810 | 0.762342 | 1.0 | 0.496914 | 0.750000 | 1.0 | 0.753831 | 0.745972 | 1.0 |
| 11 | 0.455031 | 0.787074 | 1.0 | 0.796998 | 0.770118 | 1.0 | 0.543040 | 0.730476 | 1.0 | 0.833561 | 0.579147 | 1.0 |
| 12 | 0.439718 | 0.796463 | 1.0 | 0.802695 | 0.785933 | 1.0 | 0.497217 | 0.760476 | 1.0 | 0.746869 | 0.791469 | 1.0 |
| 13 | 0.430106 | 0.802576 | 1.0 | 0.809273 | 0.791525 | 1.0 | 0.514964 | 0.755714 | 1.0 | 0.752798 | 0.764929 | 1.0 |
| 14 | 0.416784 | 0.809476 | 1.0 | 0.822789 | 0.788641 | 1.0 | 0.551940 | 0.751905 | 1.0 | 0.710568 | 0.854028 | 1.0 |
| 15 | 0.402860 | 0.818646 | 1.0 | 0.823399 | 0.811097 | 1.0 | 0.538733 | 0.760476 | 1.0 | 0.785124 | 0.720379 | 1.0 |
| 16 | 0.391986 | 0.823799 | 1.0 | 0.833183 | 0.809524 | 1.0 | 0.523489 | 0.751905 | 1.0 | 0.728205 | 0.807583 | 1.0 |
| 17 | 0.380065 | 0.831572 | 1.0 | 0.840575 | 0.818174 | 1.0 | 0.539029 | 0.763333 | 1.0 | 0.818493 | 0.679621 | 1.0 |
| 18 | 0.368673 | 0.837467 | 1.0 | 0.851077 | 0.817912 | 1.0 | 0.583162 | 0.751429 | 1.0 | 0.829419 | 0.636019 | 1.0 |
| 19 | 0.352136 | 0.845983 | 1.0 | 0.855004 | 0.833115 | 1.0 | 0.590740 | 0.757619 | 1.0 | 0.780864 | 0.719431 | 1.0 |
| 20 | 0.342132 | 0.849869 | 1.0 | 0.853260 | 0.844910 | 1.0 | 0.542552 | 0.747143 | 1.0 | 0.717608 | 0.818957 | 1.0 |
| 21 | 0.330410 | 0.854891 | 1.0 | 0.861556 | 0.845522 | 1.0 | 0.599773 | 0.750000 | 1.0 | 0.803899 | 0.664455 | 1.0 |
| 22 | 0.317804 | 0.862358 | 1.0 | 0.867891 | 0.854696 | 1.0 | 0.613357 | 0.749048 | 1.0 | 0.836735 | 0.621801 | 1.0 |
| 23 | 0.307979 | 0.867817 | 1.0 | 0.880354 | 0.851201 | 1.0 | 0.638583 | 0.766190 | 1.0 | 0.775930 | 0.751659 | 1.0 |
| 24 | 0.294566 | 0.874760 | 1.0 | 0.888135 | 0.857405 | 1.0 | 0.748001 | 0.750952 | 1.0 | 0.801587 | 0.670142 | 1.0 |
| 25 | 0.283282 | 0.879389 | 1.0 | 0.891585 | 0.863696 | 1.0 | 0.646040 | 0.758571 | 1.0 | 0.758004 | 0.763033 | 1.0 |
| 26 | 0.275513 | 0.885590 | 1.0 | 0.893727 | 0.875142 | 1.0 | 0.820108 | 0.736190 | 1.0 | 0.827451 | 0.600000 | 1.0 |
| 27 | 0.260448 | 0.889694 | 1.0 | 0.898561 | 0.878462 | 1.0 | 0.817174 | 0.743333 | 1.0 | 0.720890 | 0.798104 | 1.0 |
| 28 | 0.256672 | 0.893974 | 1.0 | 0.904604 | 0.880734 | 1.0 | 0.721220 | 0.757619 | 1.0 | 0.735751 | 0.807583 | 1.0 |
| 29 | 0.246209 | 0.898515 | 1.0 | 0.909491 | 0.885015 | 1.0 | 0.996128 | 0.755714 | 1.0 | 0.734024 | 0.805687 | 1.0 |
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(20,20))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
model.save('/content/model2')
INFO:tensorflow:Assets written to: /content/model2/assets
del X_train
del X_val
# os.listdir returns files in arbitrary order; sort numerically so predictions
# line up with the id column of sample_submission.csv
jpg_list_test = sorted(os.listdir('/content/test'), key=lambda f: int(f.split('.')[0]))
x_test = [] # images as arrays
end1=0
for image in jpg_list_test:
x_test.append(cv2.resize(cv2.imread('/content/test/{}'.format(image)), (150,150), interpolation=cv2.INTER_CUBIC))
clear_output(wait=True)
end1=end1+1
print('Images Processed:', end1)
print('Percent Complete:', round((100*(end1/len(jpg_list_test))),2))
Images Processed: 12500
Percent Complete: 100
x_test = asarray(x_test, dtype='float32') / 255  # convert the list of images to an array before scaling
y_pred=model.predict(x_test, verbose=0)
y_pred
array([[5.3207964e-07],
[4.9001147e-04],
[5.4316002e-01],
...,
[2.9817612e-03],
[6.9274073e-23],
[9.6410733e-01]], dtype=float32)
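This competition is scored by log loss, which punishes confidently wrong answers severely: a prediction of 6.9e-23 for an image that is actually a dog contributes a loss of about 51 by itself. Clipping probabilities away from 0 and 1 before submitting caps that penalty; a NumPy sketch (the 0.005 bound is a common heuristic, not something used in this notebook):

```python
import numpy as np

def clip_for_logloss(p, eps=0.005):
    """Bound probabilities to [eps, 1 - eps] so no single image can
    contribute more than -log(eps) (about 5.3 here) to the mean log loss."""
    return np.clip(p, eps, 1 - eps)

preds = np.array([5.3e-07, 4.9e-04, 0.5432, 0.9999])
print(clip_for_logloss(preds))
```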
import matplotlib.pyplot as plt
image = Image.open('/content/test/{}'.format(jpg_list_test[0]))
plt.imshow(image)
<matplotlib.image.AxesImage at 0x7f53a0a6ecd0>
images=y_pred[0:36].astype(float)
images=np.around(images, decimals=4)
labeldgct=jpg_list_test[0:36]
im = Image.open("/content/test/9981.jpg")
plt.imshow(im)  # imshow displays RGB images; matshow is intended for 2D matrices
<matplotlib.image.AxesImage at 0x7f53a0943890>
print(images[1])
array([0.0005])
fig, axs = plt.subplots(6, 6, figsize = (40, 40))
plt.gray()
for i, ax in enumerate(axs.flat):
i=500+i
ax.set_title('0-1, Cat-Dog: {}'.format(np.around(y_pred[i].astype(float), decimals=4)))
    ax.imshow(Image.open('/content/test/{}'.format(jpg_list_test[i])))  # imshow for RGB images
ax.axis('off')
submission_df=pd.read_csv('/content/sample_submission.csv')
submission_df['label']=y_pred
submission_df.to_csv('/content/submission.csv', index=False)
submission_df
| id | label | |
|---|---|---|
| 0 | 1 | 5.320796e-07 |
| 1 | 2 | 4.900115e-04 |
| 2 | 3 | 5.431600e-01 |
| 3 | 4 | 6.591790e-01 |
| 4 | 5 | 8.257716e-01 |
| ... | ... | ... |
| 12495 | 12496 | 1.268503e-01 |
| 12496 | 12497 | 3.085707e-01 |
| 12497 | 12498 | 2.981761e-03 |
| 12498 | 12499 | 6.927407e-23 |
| 12499 | 12500 | 9.641073e-01 |
12500 rows × 2 columns
# from keras.utils.vis_utils import plot_model
# plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
!kaggle competitions submit -c dogs-vs-cats-redux-kernels-edition -f /content/submission.csv -m 'CNN3'
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 194k/194k [00:01<00:00, 116kB/s]
Successfully submitted to Dogs vs. Cats Redux: Kernels Edition
!kaggle competitions submissions -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName date description status publicScore privateScore
--------------------- ------------------- ----------- -------- ----------- ------------
submission.csv 2021-06-09 20:42:45 CNN3 complete 3.71867 3.71867
submission.csv 2021-06-08 02:44:00 CNN2 complete 6.42962 6.42962
sample_submission.csv 2021-06-08 02:11:27 CNN1 complete 0.69314 0.69314
from google.colab import files
files.upload()
Saving kaggle.json to kaggle.json
{'kaggle.json': b'{"username":"michaelrocchio","key":"<REDACTED>"}'}
!mkdir -p /root/.kaggle
!mv /content/kaggle.json /root/.kaggle/kaggle.json
!chmod 600 /root/.kaggle/kaggle.json  # silences the "readable by other users" warning below
ls /root/.kaggle/
kaggle.json
import kaggle
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
!kaggle competitions download -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
test.zip: Skipping, found more recently modified local copy (use --force to force download)
train.zip: Skipping, found more recently modified local copy (use --force to force download)
sample_submission.csv: Skipping, found more recently modified local copy (use --force to force download)
from zipfile import ZipFile
zip_test = ZipFile('/content/test.zip')
zip_test.extractall()
zip_train = ZipFile('/content/train.zip')
zip_train.extractall()
import os
import pandas as pd
import numpy as np
jpg_list=os.listdir('/content/train')
labels_list=pd.DataFrame({
'label': jpg_list,
'filename': jpg_list
})
labels_list['label']=labels_list['label'].str[0:3]
labels_list['y']=np.where(labels_list['label']=='dog',1,0)
labels_list
| label | filename | y | |
|---|---|---|---|
| 0 | cat | cat.9673.jpg | 0 |
| 1 | cat | cat.3257.jpg | 0 |
| 2 | dog | dog.11897.jpg | 1 |
| 3 | dog | dog.8137.jpg | 1 |
| 4 | dog | dog.2316.jpg | 1 |
| ... | ... | ... | ... |
| 24995 | dog | dog.12438.jpg | 1 |
| 24996 | dog | dog.10947.jpg | 1 |
| 24997 | dog | dog.6583.jpg | 1 |
| 24998 | cat | cat.4690.jpg | 0 |
| 24999 | cat | cat.1629.jpg | 0 |
25000 rows × 3 columns
import sys
from PIL import Image
import time
import glob
from IPython.display import clear_output
import cv2
from numpy import asarray
jpg_list_train=jpg_list
x = [] # images as arrays
y = [] # labels
end1=0
for image in jpg_list_train:
x.append(cv2.resize(cv2.imread('/content/train/{}'.format(image)), (150,150), interpolation=cv2.INTER_CUBIC))
clear_output(wait=True)
end1=end1+1
print('Images Processed:', end1)
print('Percent Complete:', round((100*(end1/len(jpg_list_train))),2))
if 'dog' in image:
y.append(1)
elif 'cat' in image:
y.append(0)
Images Processed: 25000
Percent Complete: 100.0
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img
from keras import layers, models, optimizers
from keras import backend as K
from numpy import asarray
x=asarray(x, dtype='float32')
y=asarray(y, dtype='float32')
# x=np.transpose(x, (2, 1, 3, 0))
x=x/255
x.shape
(25000, 150, 150, 3)
y.shape
(25000,)
X_train = x[2100:,:,:,:]
X_val = x[:2100,:,:,:]
y_train = y[2100:]
y_val = y[:2100]
X_train.shape
del x
del y
X_train.shape
(22900, 150, 150, 3)
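One caveat on the split above: taking the first 2,100 entries of `os.listdir` as the validation set relies on the directory listing being effectively shuffled, and it does not guarantee a 50/50 cat/dog mix in the hold-out set. `train_test_split` from scikit-learn (imported earlier) makes both shuffling and stratification explicit; a minimal sketch on toy stand-in arrays (the names and shapes here are illustrative, not the real image tensor):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for the (25000, 150, 150, 3) image tensor and 0/1 labels.
x_demo = np.random.rand(100, 4).astype('float32')
y_demo = np.array([0, 1] * 50, dtype='float32')

# stratify=y_demo keeps the cat/dog ratio identical in both partitions.
X_tr, X_va, y_tr, y_va = train_test_split(
    x_demo, y_demo, test_size=0.1, random_state=42, stratify=y_demo)
print(X_tr.shape, X_va.shape, int(y_va.sum()))
```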
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
import tensorflow
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
model=keras.models.Sequential()
model.add(keras.layers.Conv2D(256,3,3,padding = 'same', activation ='relu', input_shape=(150,150,3)))  # note: the positional 3,3 is kernel_size=3, strides=3 (not a 3x3 kernel with stride 1), which explains the rapid spatial shrinkage in the summary below
model.add(keras.layers.Conv2D(256,3,3,padding = "same", activation = "relu"))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2), padding = "same"))
model.add(keras.layers.Conv2D(128,3,3,padding = 'same', activation = 'relu'))
model.add(keras.layers.Conv2D(128,3,3,padding = 'same', activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2), padding = "same"))
model.add(keras.layers.Conv2D(64,3,3,padding = 'same', activation = 'relu'))
model.add(keras.layers.Conv2D(64,3,3,padding = 'same', activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2), padding = "same"))
model.add(keras.layers.Conv2D(32,3,3,padding = 'same', activation = 'relu'))
model.add(keras.layers.Conv2D(32,3,3,padding = 'same', activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2), padding = "same"))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(256,activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(256,activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(1))
model.add(keras.layers.Activation('sigmoid'))
model.summary()
keras.backend.clear_session()
# model = Sequential([
# keras.layers.Convolution2D(32, 3, 3, input_shape=(150, 150, 3), activation='relu'),
# keras.layers.Convolution2D(32, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(64, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(64, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(128, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Convolution2D(256, 3, 3, padding='same', activation='relu'),
# keras.layers.Convolution2D(256, 3, 3, padding='same', activation='relu'),
# keras.layers.MaxPooling2D(padding='same', pool_size=(2, 2)),
# keras.layers.Flatten(),
# keras.layers.Dense(256, activation='relu'),
# keras.layers.Dropout(0.5),
# keras.layers.Dense(256, activation='relu'),
# keras.layers.Dropout(0.5),
# keras.layers.Dense(1),
# keras.layers.Activation('sigmoid')
# ])
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 50, 50, 256) 7168
_________________________________________________________________
conv2d_1 (Conv2D) (None, 17, 17, 256) 590080
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 9, 9, 256) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 3, 3, 128) 295040
_________________________________________________________________
conv2d_3 (Conv2D) (None, 1, 1, 128) 147584
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 1, 1, 128) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 1, 1, 64) 73792
_________________________________________________________________
conv2d_5 (Conv2D) (None, 1, 1, 64) 36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 1, 1, 64) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 1, 1, 32) 18464
_________________________________________________________________
conv2d_7 (Conv2D) (None, 1, 1, 32) 9248
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 1, 1, 32) 0
_________________________________________________________________
flatten (Flatten) (None, 32) 0
_________________________________________________________________
dense (Dense) (None, 256) 8448
_________________________________________________________________
dropout (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 65792
_________________________________________________________________
dropout_1 (Dropout) (None, 256) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 257
_________________________________________________________________
activation (Activation) (None, 1) 0
=================================================================
Total params: 1,252,801
Trainable params: 1,252,801
Non-trainable params: 0
_________________________________________________________________
import tensorflow as tf
class MulticlassTruePositives(tf.keras.metrics.Metric):
def __init__(self, name='multiclass_true_positives', **kwargs):
super(MulticlassTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name='tp', initializer='zeros')
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')
values = tf.cast(values, 'float32')
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, 'float32')
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.)
def recall(y_true, y_pred):
y_true = K.ones_like(y_true)
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
all_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (all_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
y_true = K.ones_like(y_true)
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_score(y_true, y_pred):
    p = precision(y_true, y_pred)  # was precision_m, which is undefined (NameError)
    r = recall(y_true, y_pred)     # was recall_m, also undefined
    return 2*((p*r)/(p+r+K.epsilon()))
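For reference, the recall/precision helpers above overwrite `y_true` with `K.ones_like(y_true)`, so they effectively score predictions against an all-positive target rather than the true labels. The standard definitions are easy to sanity-check in plain NumPy; the toy values below are illustrative:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])
y_hat  = np.array([1, 0, 0, 1, 1, 1])   # hard 0/1 predictions

tp = np.sum((y_true == 1) & (y_hat == 1))   # true positives
fp = np.sum((y_true == 0) & (y_hat == 1))   # false positives
fn = np.sum((y_true == 1) & (y_hat == 0))   # false negatives

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```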
model.compile(loss="binary_crossentropy", optimizer=RMSprop(learning_rate=1e-4), metrics=['accuracy', tensorflow.keras.metrics.categorical_accuracy, tensorflow.keras.metrics.Precision(), tensorflow.keras.metrics.Recall(), tensorflow.keras.metrics.TopKCategoricalAccuracy(k=2)])
import datetime
start=datetime.datetime.now()
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val))
print(datetime.datetime.now() - start)
Epoch 1/30
716/716 [==============================] - 20s 21ms/step - loss: 0.6883 - accuracy: 0.5400 - categorical_accuracy: 1.0000 - precision: 0.5474 - recall: 0.4592 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6718 - val_accuracy: 0.5824 - val_categorical_accuracy: 1.0000 - val_precision: 0.6254 - val_recall: 0.4209 - val_top_k_categorical_accuracy: 1.0000
Epoch 2/30
716/716 [==============================] - 15s 21ms/step - loss: 0.6600 - accuracy: 0.6087 - categorical_accuracy: 1.0000 - precision: 0.5992 - recall: 0.6553 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6336 - val_accuracy: 0.6471 - val_categorical_accuracy: 1.0000 - val_precision: 0.7055 - val_recall: 0.5109 - val_top_k_categorical_accuracy: 1.0000
Epoch 3/30
716/716 [==============================] - 15s 21ms/step - loss: 0.6083 - accuracy: 0.6746 - categorical_accuracy: 1.0000 - precision: 0.6767 - recall: 0.6680 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5665 - val_accuracy: 0.7105 - val_categorical_accuracy: 1.0000 - val_precision: 0.7260 - val_recall: 0.6806 - val_top_k_categorical_accuracy: 1.0000
Epoch 4/30
716/716 [==============================] - 15s 21ms/step - loss: 0.5666 - accuracy: 0.7126 - categorical_accuracy: 1.0000 - precision: 0.7295 - recall: 0.6753 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5455 - val_accuracy: 0.7262 - val_categorical_accuracy: 1.0000 - val_precision: 0.7963 - val_recall: 0.6114 - val_top_k_categorical_accuracy: 1.0000
Epoch 5/30
716/716 [==============================] - 15s 21ms/step - loss: 0.5307 - accuracy: 0.7404 - categorical_accuracy: 1.0000 - precision: 0.7598 - recall: 0.7028 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5094 - val_accuracy: 0.7595 - val_categorical_accuracy: 1.0000 - val_precision: 0.7665 - val_recall: 0.7498 - val_top_k_categorical_accuracy: 1.0000
Epoch 6/30
716/716 [==============================] - 15s 20ms/step - loss: 0.4958 - accuracy: 0.7664 - categorical_accuracy: 1.0000 - precision: 0.7804 - recall: 0.7412 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4817 - val_accuracy: 0.7710 - val_categorical_accuracy: 1.0000 - val_precision: 0.8060 - val_recall: 0.7166 - val_top_k_categorical_accuracy: 1.0000
Epoch 7/30
716/716 [==============================] - 15s 21ms/step - loss: 0.4664 - accuracy: 0.7847 - categorical_accuracy: 1.0000 - precision: 0.7966 - recall: 0.7644 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4861 - val_accuracy: 0.7619 - val_categorical_accuracy: 1.0000 - val_precision: 0.7894 - val_recall: 0.7175 - val_top_k_categorical_accuracy: 1.0000
Epoch 8/30
716/716 [==============================] - 15s 21ms/step - loss: 0.4376 - accuracy: 0.8012 - categorical_accuracy: 1.0000 - precision: 0.8111 - recall: 0.7851 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4611 - val_accuracy: 0.7852 - val_categorical_accuracy: 1.0000 - val_precision: 0.8044 - val_recall: 0.7564 - val_top_k_categorical_accuracy: 1.0000
Epoch 9/30
716/716 [==============================] - 15s 21ms/step - loss: 0.4093 - accuracy: 0.8182 - categorical_accuracy: 1.0000 - precision: 0.8300 - recall: 0.8001 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4522 - val_accuracy: 0.7905 - val_categorical_accuracy: 1.0000 - val_precision: 0.7819 - val_recall: 0.8085 - val_top_k_categorical_accuracy: 1.0000
Epoch 10/30
716/716 [==============================] - 15s 21ms/step - loss: 0.3825 - accuracy: 0.8328 - categorical_accuracy: 1.0000 - precision: 0.8418 - recall: 0.8194 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4659 - val_accuracy: 0.7857 - val_categorical_accuracy: 1.0000 - val_precision: 0.7679 - val_recall: 0.8218 - val_top_k_categorical_accuracy: 1.0000
Epoch 11/30
716/716 [==============================] - 15s 20ms/step - loss: 0.3489 - accuracy: 0.8517 - categorical_accuracy: 1.0000 - precision: 0.8603 - recall: 0.8397 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4419 - val_accuracy: 0.7914 - val_categorical_accuracy: 1.0000 - val_precision: 0.8010 - val_recall: 0.7782 - val_top_k_categorical_accuracy: 1.0000
Epoch 12/30
716/716 [==============================] - 15s 21ms/step - loss: 0.3170 - accuracy: 0.8686 - categorical_accuracy: 1.0000 - precision: 0.8710 - recall: 0.8652 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4729 - val_accuracy: 0.7952 - val_categorical_accuracy: 1.0000 - val_precision: 0.8630 - val_recall: 0.7043 - val_top_k_categorical_accuracy: 1.0000
Epoch 13/30
716/716 [==============================] - 15s 21ms/step - loss: 0.2821 - accuracy: 0.8861 - categorical_accuracy: 1.0000 - precision: 0.8906 - recall: 0.8801 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4795 - val_accuracy: 0.7890 - val_categorical_accuracy: 1.0000 - val_precision: 0.8248 - val_recall: 0.7365 - val_top_k_categorical_accuracy: 1.0000
Epoch 14/30
716/716 [==============================] - 15s 21ms/step - loss: 0.2458 - accuracy: 0.9023 - categorical_accuracy: 1.0000 - precision: 0.9051 - recall: 0.8988 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6989 - val_accuracy: 0.7619 - val_categorical_accuracy: 1.0000 - val_precision: 0.8903 - val_recall: 0.6000 - val_top_k_categorical_accuracy: 1.0000
Epoch 15/30
716/716 [==============================] - 15s 21ms/step - loss: 0.2165 - accuracy: 0.9145 - categorical_accuracy: 1.0000 - precision: 0.9178 - recall: 0.9106 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6304 - val_accuracy: 0.7724 - val_categorical_accuracy: 1.0000 - val_precision: 0.8557 - val_recall: 0.6578 - val_top_k_categorical_accuracy: 1.0000
Epoch 16/30
716/716 [==============================] - 15s 21ms/step - loss: 0.1808 - accuracy: 0.9303 - categorical_accuracy: 1.0000 - precision: 0.9317 - recall: 0.9285 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.8274 - val_accuracy: 0.7381 - val_categorical_accuracy: 1.0000 - val_precision: 0.6762 - val_recall: 0.9185 - val_top_k_categorical_accuracy: 1.0000
Epoch 17/30
716/716 [==============================] - 15s 21ms/step - loss: 0.1550 - accuracy: 0.9408 - categorical_accuracy: 1.0000 - precision: 0.9411 - recall: 0.9404 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6956 - val_accuracy: 0.7843 - val_categorical_accuracy: 1.0000 - val_precision: 0.8444 - val_recall: 0.6995 - val_top_k_categorical_accuracy: 1.0000
Epoch 18/30
716/716 [==============================] - 15s 21ms/step - loss: 0.1307 - accuracy: 0.9524 - categorical_accuracy: 1.0000 - precision: 0.9536 - recall: 0.9510 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6685 - val_accuracy: 0.7890 - val_categorical_accuracy: 1.0000 - val_precision: 0.7914 - val_recall: 0.7877 - val_top_k_categorical_accuracy: 1.0000
Epoch 19/30
716/716 [==============================] - 15s 21ms/step - loss: 0.1082 - accuracy: 0.9609 - categorical_accuracy: 1.0000 - precision: 0.9603 - recall: 0.9616 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.1708 - val_accuracy: 0.7671 - val_categorical_accuracy: 1.0000 - val_precision: 0.8734 - val_recall: 0.6275 - val_top_k_categorical_accuracy: 1.0000
Epoch 20/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0910 - accuracy: 0.9665 - categorical_accuracy: 1.0000 - precision: 0.9683 - recall: 0.9646 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.8846 - val_accuracy: 0.7971 - val_categorical_accuracy: 1.0000 - val_precision: 0.7806 - val_recall: 0.8294 - val_top_k_categorical_accuracy: 1.0000
Epoch 21/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0769 - accuracy: 0.9721 - categorical_accuracy: 1.0000 - precision: 0.9719 - recall: 0.9724 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.8888 - val_accuracy: 0.7967 - val_categorical_accuracy: 1.0000 - val_precision: 0.7946 - val_recall: 0.8028 - val_top_k_categorical_accuracy: 1.0000
Epoch 22/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0676 - accuracy: 0.9768 - categorical_accuracy: 1.0000 - precision: 0.9762 - recall: 0.9775 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.1557 - val_accuracy: 0.7833 - val_categorical_accuracy: 1.0000 - val_precision: 0.8132 - val_recall: 0.7384 - val_top_k_categorical_accuracy: 1.0000
Epoch 23/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0626 - accuracy: 0.9779 - categorical_accuracy: 1.0000 - precision: 0.9781 - recall: 0.9777 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.0792 - val_accuracy: 0.7981 - val_categorical_accuracy: 1.0000 - val_precision: 0.7876 - val_recall: 0.8190 - val_top_k_categorical_accuracy: 1.0000
Epoch 24/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0564 - accuracy: 0.9811 - categorical_accuracy: 1.0000 - precision: 0.9815 - recall: 0.9807 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.2366 - val_accuracy: 0.7995 - val_categorical_accuracy: 1.0000 - val_precision: 0.7968 - val_recall: 0.8066 - val_top_k_categorical_accuracy: 1.0000
Epoch 25/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0509 - accuracy: 0.9833 - categorical_accuracy: 1.0000 - precision: 0.9824 - recall: 0.9843 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.2271 - val_accuracy: 0.7895 - val_categorical_accuracy: 1.0000 - val_precision: 0.7967 - val_recall: 0.7801 - val_top_k_categorical_accuracy: 1.0000
Epoch 26/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0442 - accuracy: 0.9846 - categorical_accuracy: 1.0000 - precision: 0.9837 - recall: 0.9856 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.4290 - val_accuracy: 0.7619 - val_categorical_accuracy: 1.0000 - val_precision: 0.7166 - val_recall: 0.8701 - val_top_k_categorical_accuracy: 1.0000
Epoch 27/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0425 - accuracy: 0.9858 - categorical_accuracy: 1.0000 - precision: 0.9865 - recall: 0.9850 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.6310 - val_accuracy: 0.8005 - val_categorical_accuracy: 1.0000 - val_precision: 0.8502 - val_recall: 0.7318 - val_top_k_categorical_accuracy: 1.0000
Epoch 28/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0396 - accuracy: 0.9870 - categorical_accuracy: 1.0000 - precision: 0.9882 - recall: 0.9858 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.9875 - val_accuracy: 0.7976 - val_categorical_accuracy: 1.0000 - val_precision: 0.8029 - val_recall: 0.7915 - val_top_k_categorical_accuracy: 1.0000
Epoch 29/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0377 - accuracy: 0.9871 - categorical_accuracy: 1.0000 - precision: 0.9858 - recall: 0.9884 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.8500 - val_accuracy: 0.7943 - val_categorical_accuracy: 1.0000 - val_precision: 0.8106 - val_recall: 0.7706 - val_top_k_categorical_accuracy: 1.0000
Epoch 30/30
716/716 [==============================] - 15s 21ms/step - loss: 0.0374 - accuracy: 0.9877 - categorical_accuracy: 1.0000 - precision: 0.9885 - recall: 0.9869 - top_k_categorical_accuracy: 1.0000 - val_loss: 1.8471 - val_accuracy: 0.7957 - val_categorical_accuracy: 1.0000 - val_precision: 0.7925 - val_recall: 0.8038 - val_top_k_categorical_accuracy: 1.0000
datetime.timedelta(seconds=446, microseconds=444122)
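The epoch log above shows `val_loss` reaching its minimum around epoch 11 (0.4419) and then climbing while training loss keeps falling, i.e. the model is overfitting for most of the run. Keras's `EarlyStopping` callback (already imported above) handles this during `fit`; its core patience logic can be sketched in plain Python against the recorded losses:

```python
# Minimal sketch of the patience logic behind
# keras.callbacks.EarlyStopping(monitor='val_loss', patience=3).
def early_stop_epoch(val_losses, patience=3):
    """Return (epoch training stops, epoch with the best val_loss), 0-indexed."""
    best, best_epoch, wait = float('inf'), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# First 14 val_loss values from the run above.
losses = [0.6718, 0.6336, 0.5665, 0.5455, 0.5094, 0.4817, 0.4861,
          0.4611, 0.4522, 0.4659, 0.4419, 0.4729, 0.4795, 0.6989]
stop, best = early_stop_epoch(losses, patience=3)
print(stop, best)
```

With `restore_best_weights=True`, Keras would also roll the model back to the best epoch's weights instead of keeping the final overfit ones.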
pd.DataFrame(history.history)
|   | loss | accuracy | categorical_accuracy | precision | recall | top_k_categorical_accuracy | val_loss | val_accuracy | val_categorical_accuracy | val_precision | val_recall | val_top_k_categorical_accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.688340 | 0.539956 | 1.0 | 0.547396 | 0.459152 | 1.0 | 0.671797 | 0.582381 | 1.0 | 0.625352 | 0.420853 | 1.0 |
| 1 | 0.659994 | 0.608690 | 1.0 | 0.599233 | 0.655308 | 1.0 | 0.633621 | 0.647143 | 1.0 | 0.705497 | 0.510900 | 1.0 |
| 2 | 0.608276 | 0.674585 | 1.0 | 0.676728 | 0.667977 | 1.0 | 0.566482 | 0.710476 | 1.0 | 0.725986 | 0.680569 | 1.0 |
| 3 | 0.566555 | 0.712576 | 1.0 | 0.729495 | 0.675317 | 1.0 | 0.545475 | 0.726190 | 1.0 | 0.796296 | 0.611374 | 1.0 |
| 4 | 0.530658 | 0.740437 | 1.0 | 0.759800 | 0.702840 | 1.0 | 0.509378 | 0.759524 | 1.0 | 0.766473 | 0.749763 | 1.0 |
| 5 | 0.495842 | 0.766419 | 1.0 | 0.780405 | 0.741197 | 1.0 | 0.481726 | 0.770952 | 1.0 | 0.805970 | 0.716588 | 1.0 |
| 6 | 0.466358 | 0.784672 | 1.0 | 0.796576 | 0.764351 | 1.0 | 0.486139 | 0.761905 | 1.0 | 0.789364 | 0.717536 | 1.0 |
| 7 | 0.437574 | 0.801223 | 1.0 | 0.811140 | 0.785059 | 1.0 | 0.461124 | 0.785238 | 1.0 | 0.804435 | 0.756398 | 1.0 |
| 8 | 0.409347 | 0.818166 | 1.0 | 0.829965 | 0.800087 | 1.0 | 0.452173 | 0.790476 | 1.0 | 0.781852 | 0.808531 | 1.0 |
| 9 | 0.382473 | 0.832795 | 1.0 | 0.841831 | 0.819397 | 1.0 | 0.465949 | 0.785714 | 1.0 | 0.767936 | 0.821801 | 1.0 |
| 10 | 0.348850 | 0.851703 | 1.0 | 0.860263 | 0.839668 | 1.0 | 0.441941 | 0.791429 | 1.0 | 0.800976 | 0.778199 | 1.0 |
| 11 | 0.316965 | 0.868603 | 1.0 | 0.871042 | 0.865181 | 1.0 | 0.472914 | 0.795238 | 1.0 | 0.862950 | 0.704265 | 1.0 |
| 12 | 0.282087 | 0.886070 | 1.0 | 0.890628 | 0.880122 | 1.0 | 0.479465 | 0.789048 | 1.0 | 0.824841 | 0.736493 | 1.0 |
| 13 | 0.245782 | 0.902314 | 1.0 | 0.905068 | 0.898820 | 1.0 | 0.698850 | 0.761905 | 1.0 | 0.890295 | 0.600000 | 1.0 |
| 14 | 0.216530 | 0.914541 | 1.0 | 0.917753 | 0.910616 | 1.0 | 0.630444 | 0.772381 | 1.0 | 0.855734 | 0.657820 | 1.0 |
| 15 | 0.180814 | 0.930262 | 1.0 | 0.931703 | 0.928528 | 1.0 | 0.827396 | 0.738095 | 1.0 | 0.676204 | 0.918483 | 1.0 |
| 16 | 0.154973 | 0.940786 | 1.0 | 0.941068 | 0.940411 | 1.0 | 0.695623 | 0.784286 | 1.0 | 0.844394 | 0.699526 | 1.0 |
| 17 | 0.130734 | 0.952402 | 1.0 | 0.953649 | 0.950983 | 1.0 | 0.668503 | 0.789048 | 1.0 | 0.791429 | 0.787678 | 1.0 |
| 18 | 0.108183 | 0.960917 | 1.0 | 0.960297 | 0.961555 | 1.0 | 1.170828 | 0.767143 | 1.0 | 0.873351 | 0.627488 | 1.0 |
| 19 | 0.090955 | 0.966507 | 1.0 | 0.968251 | 0.964613 | 1.0 | 0.884643 | 0.797143 | 1.0 | 0.780553 | 0.829384 | 1.0 |
| 20 | 0.076889 | 0.972140 | 1.0 | 0.971880 | 0.972390 | 1.0 | 0.888777 | 0.796667 | 1.0 | 0.794559 | 0.802844 | 1.0 |
| 21 | 0.067647 | 0.976812 | 1.0 | 0.976178 | 0.977457 | 1.0 | 1.155740 | 0.783333 | 1.0 | 0.813152 | 0.738389 | 1.0 |
| 22 | 0.062623 | 0.977948 | 1.0 | 0.978147 | 0.977720 | 1.0 | 1.079189 | 0.798095 | 1.0 | 0.787603 | 0.818957 | 1.0 |
| 23 | 0.056402 | 0.981135 | 1.0 | 0.981548 | 0.980690 | 1.0 | 1.236558 | 0.799524 | 1.0 | 0.796816 | 0.806635 | 1.0 |
| 24 | 0.050939 | 0.983319 | 1.0 | 0.982384 | 0.984273 | 1.0 | 1.227136 | 0.789524 | 1.0 | 0.796709 | 0.780095 | 1.0 |
| 25 | 0.044210 | 0.984629 | 1.0 | 0.983692 | 0.985583 | 1.0 | 1.429015 | 0.761905 | 1.0 | 0.716628 | 0.870142 | 1.0 |
| 26 | 0.042480 | 0.985764 | 1.0 | 0.986523 | 0.984972 | 1.0 | 1.630978 | 0.800476 | 1.0 | 0.850220 | 0.731754 | 1.0 |
| 27 | 0.039579 | 0.986987 | 1.0 | 0.988176 | 0.985758 | 1.0 | 1.987541 | 0.797619 | 1.0 | 0.802885 | 0.791469 | 1.0 |
| 28 | 0.037709 | 0.987074 | 1.0 | 0.985795 | 0.988379 | 1.0 | 1.849976 | 0.794286 | 1.0 | 0.810568 | 0.770616 | 1.0 |
| 29 | 0.037374 | 0.987729 | 1.0 | 0.988535 | 0.986894 | 1.0 | 1.847111 | 0.795714 | 1.0 | 0.792523 | 0.803791 | 1.0 |
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(20,20))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
model.save('/content/model2')
INFO:tensorflow:Assets written to: /content/model2/assets
del X_train
del X_val
jpg_list_test=os.listdir('/content/test')
x_test = [] # images as arrays
end1=0
for image in jpg_list_test:
x_test.append(cv2.resize(cv2.imread('/content/test/{}'.format(image)), (150,150), interpolation=cv2.INTER_CUBIC))
clear_output(wait=True)
end1=end1+1
print('Images Processed:', end1)
print('Percent Complete:', round((100*(end1/len(jpg_list_test))),2))
Images Processed: 12500
Percent Complete: 100
x_test = asarray(x_test, dtype='float32') / 255  # convert the list of images to an array before scaling; dividing a plain list by 255 raises a TypeError
y_pred=model.predict(x_test, verbose=0)
y_pred
array([[2.3269310e-13],
[1.0542270e-13],
[9.9298227e-01],
...,
[9.6636587e-21],
[0.0000000e+00],
[1.3332120e-04]], dtype=float32)
y_pred.min()
0.0
y_pred.max()
1.0
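The sigmoid outputs above are probabilities of the "dog" class; to turn them into hard labels for display or accuracy counts, threshold at 0.5. A one-line NumPy sketch on hypothetical values shaped like `model.predict`'s `(N, 1)` output:

```python
import numpy as np

# Hypothetical sigmoid outputs in the same (N, 1) shape as y_pred.
probs = np.array([[2.3e-13], [0.9930], [0.0201], [0.99996]])
labels = (probs > 0.5).astype(int).ravel()   # 0 = cat, 1 = dog
print(labels)
```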
import matplotlib.pyplot as plt
image = Image.open('/content/test/{}'.format(jpg_list_test[0]))
plt.imshow(image)
<matplotlib.image.AxesImage at 0x7f63fc2b36d0>
images=y_pred[0:36].astype(float)
images=np.around(images, decimals=4)
labeldgct=jpg_list_test[0:36]
im = Image.open("/content/test/9981.jpg")
plt.matshow(im)
<matplotlib.image.AxesImage at 0x7f63fc38c450>
print(images[1])
array([0.])
fig, axs = plt.subplots(6, 6, figsize = (40, 40))
plt.gray()
for i, ax in enumerate(axs.flat):
i=500+i
ax.set_title('0-1, Cat-Dog: {}'.format(np.around(y_pred[i].astype(float), decimals=4)))
ax.matshow(Image.open('/content/test/{}'.format(jpg_list_test[i])))
ax.axis('off')
submission_df=pd.read_csv('/content/sample_submission.csv')
submission_df['label']=y_pred
submission_df.to_csv('/content/submission.csv', index=False)
submission_df
|   | id | label |
|---|---|---|
| 0 | 1 | 2.326931e-13 |
| 1 | 2 | 1.054227e-13 |
| 2 | 3 | 9.929823e-01 |
| 3 | 4 | 2.007465e-02 |
| 4 | 5 | 9.999650e-01 |
| ... | ... | ... |
| 12495 | 12496 | 9.895577e-01 |
| 12496 | 12497 | 5.189775e-05 |
| 12497 | 12498 | 9.663659e-21 |
| 12498 | 12499 | 0.000000e+00 |
| 12499 | 12500 | 1.333212e-04 |
12500 rows × 2 columns
# from keras.utils.vis_utils import plot_model
# plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
!kaggle competitions submit -c dogs-vs-cats-redux-kernels-edition -f /content/submission.csv -m 'CNN3'
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 198k/198k [00:02<00:00, 82.1kB/s]
Successfully submitted to Dogs vs. Cats Redux: Kernels Edition
!kaggle competitions submissions -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName date description status publicScore privateScore
--------------------- ------------------- ----------- -------- ----------- ------------
submission.csv 2021-06-09 22:06:16 CNN3 complete 7.65715 7.65715
submission.csv 2021-06-09 20:42:45 CNN3 complete 3.71867 3.71867
submission.csv 2021-06-08 02:44:00 CNN2 complete 6.42962 6.42962
sample_submission.csv 2021-06-08 02:11:27 CNN1 complete 0.69314 0.69314
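This competition is scored by log loss, which penalizes confident wrong answers without bound; predictions of exactly 0.0 or 1.0 (visible in `y_pred` above) can dominate the score, which is consistent with the 7.66 result. Clipping probabilities away from the extremes before submitting is a common hedge; a NumPy sketch with illustrative values and an arbitrary clip range:

```python
import numpy as np

def log_loss(y_true, p, eps=1e-15):
    # Clip only to avoid log(0); this mirrors the standard metric.
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true  = np.array([1.0, 0.0, 1.0, 0.0])
raw     = np.array([1.0, 0.0, 0.0, 1.0])   # two confident mistakes
clipped = np.clip(raw, 0.02, 0.98)         # hedged submission

print(round(log_loss(y_true, raw), 3), round(log_loss(y_true, clipped), 3))
```

The clipped score is far lower because each mistake now costs -log(0.02) ≈ 3.9 rather than roughly 34 per confident miss.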
from google.colab import files
files.upload()
Saving kaggle.json to kaggle.json
{'kaggle.json': b'{"username":"michaelrocchio","key":"<redacted>"}'}
!mv /content/kaggle.json /root/.kaggle/kaggle.json
ls /root/.kaggle/
kaggle.json
import kaggle
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
!kaggle competitions download -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
test.zip: Skipping, found more recently modified local copy (use --force to force download)
train.zip: Skipping, found more recently modified local copy (use --force to force download)
sample_submission.csv: Skipping, found more recently modified local copy (use --force to force download)
from zipfile import ZipFile
zip_test = ZipFile('/content/test.zip')
zip_test.extractall()
zip_train = ZipFile('/content/train.zip')
zip_train.extractall()
import os
import pandas as pd
import numpy as np
jpg_list=os.listdir('/content/train')
labels_list=pd.DataFrame({
'label': jpg_list,
'filename': jpg_list
})
labels_list['label']=labels_list['label'].str[0:3]
labels_list['y']=np.where(labels_list['label']=='dog',1,0)
labels_list
|   | label | filename | y |
|---|---|---|---|
| 0 | cat | cat.9673.jpg | 0 |
| 1 | cat | cat.3257.jpg | 0 |
| 2 | dog | dog.11897.jpg | 1 |
| 3 | dog | dog.8137.jpg | 1 |
| 4 | dog | dog.2316.jpg | 1 |
| ... | ... | ... | ... |
| 24995 | dog | dog.12438.jpg | 1 |
| 24996 | dog | dog.10947.jpg | 1 |
| 24997 | dog | dog.6583.jpg | 1 |
| 24998 | cat | cat.4690.jpg | 0 |
| 24999 | cat | cat.1629.jpg | 0 |
25000 rows × 3 columns
import sys
from PIL import Image
import time
import glob
from IPython.display import clear_output
import cv2
from numpy import asarray
jpg_list_train=jpg_list
x = [] # images as arrays
y = [] # labels
end1=0
for image in jpg_list_train:
x.append(cv2.resize(cv2.imread('/content/train/{}'.format(image)), (200,200), interpolation=cv2.INTER_CUBIC))
clear_output(wait=True)
end1=end1+1
print('Images Processed:', end1)
print('Percent Complete:', round((100*(end1/len(jpg_list_train))),2))
if 'dog' in image:
y.append(1)
elif 'cat' in image:
y.append(0)
Images Processed: 25000
Percent Complete: 100.0
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img
from keras import layers, models, optimizers
from keras import backend as K
from numpy import asarray
x=asarray(x, dtype='float32')
y=asarray(y, dtype='float32')
# x=np.transpose(x, (2, 1, 3, 0))
x.shape
(25000, 200, 200, 3)
x=x/255
y.shape
(25000,)
X_train = x[2100:,:,:,:]
X_val = x[:2100,:,:,:]
del x
y_train = y[2100:]
y_val = y[:2100]
X_train.shape
y_train.shape
del y
X_train.shape
(22900, 200, 200, 3)
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
import tensorflow
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation, Conv2D
from keras.optimizers import RMSprop, SGD
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
keras.backend.clear_session()
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(200, 200, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# compile model
opt = SGD(learning_rate=0.001, momentum=0.9)  # `lr` is deprecated in recent Keras; use `learning_rate`
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 200, 200, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 100, 100, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 100, 100, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 100, 100, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 50, 50, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 50, 50, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 50, 50, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 25, 25, 128) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 25, 25, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 25, 25, 256) 295168
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 256) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 12, 12, 256) 0
_________________________________________________________________
flatten (Flatten) (None, 36864) 0
_________________________________________________________________
dense (Dense) (None, 256) 9437440
_________________________________________________________________
dropout_4 (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 257
=================================================================
Total params: 9,826,113
Trainable params: 9,826,113
Non-trainable params: 0
_________________________________________________________________
import tensorflow as tf

class MulticlassTruePositives(tf.keras.metrics.Metric):
    def __init__(self, name='multiclass_true_positives', **kwargs):
        super(MulticlassTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Take the argmax class per sample and count matches against the labels.
        y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
        values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')
        values = tf.cast(values, 'float32')
        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, 'float32')
            values = tf.multiply(values, sample_weight)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):
        # The state of the metric will be reset at the start of each epoch.
        self.true_positives.assign(0.)
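The update logic of this custom metric can be sanity-checked with a plain NumPy version of the same computation. The sample values below are made up purely for illustration:

```python
import numpy as np

# Made-up softmax outputs for 3 samples over 2 classes (illustration only).
y_pred = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6]])
y_true = np.array([[1], [0], [0]])

# Mirror MulticlassTruePositives.update_state: take the argmax class per
# sample and count how many match the integer labels.
pred_class = np.argmax(y_pred, axis=1).reshape(-1, 1)
matches = (y_true.astype(int) == pred_class.astype(int)).astype(float)
true_positives = float(matches.sum())
```

Note that, despite its name, this metric counts label matches across all classes (an accuracy-style count) rather than per-class true positives.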
from tensorflow.keras import backend as K

def recall(y_true, y_pred):
    # The original version overwrote y_true with K.ones_like(y_true), which
    # makes the metric trivial; the line is removed here.
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    all_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (all_positives + K.epsilon())
    return recall

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_score(y_true, y_pred):
    # Call the helpers defined above (the original referenced undefined
    # precision_m/recall_m names).
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))
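The same precision/recall/F1 definitions can be sketched in plain NumPy, which is convenient for checking the backend versions above on small inputs. The threshold of 0.5 matches the rounding that K.round performs:

```python
import numpy as np

def precision_np(y_true, y_pred, thresh=0.5):
    # True positives over all predicted positives, after rounding sigmoid
    # outputs at the threshold.
    y_hat = (np.asarray(y_pred) >= thresh).astype(int)
    y_true = np.asarray(y_true).astype(int)
    tp = int(np.sum((y_hat == 1) & (y_true == 1)))
    pp = int(np.sum(y_hat == 1))
    return tp / pp if pp else 0.0

def recall_np(y_true, y_pred, thresh=0.5):
    # True positives over all actual positives.
    y_hat = (np.asarray(y_pred) >= thresh).astype(int)
    y_true = np.asarray(y_true).astype(int)
    tp = int(np.sum((y_hat == 1) & (y_true == 1)))
    ap = int(np.sum(y_true == 1))
    return tp / ap if ap else 0.0

def f1_np(y_true, y_pred, thresh=0.5):
    # Harmonic mean of precision and recall.
    p = precision_np(y_true, y_pred, thresh)
    r = recall_np(y_true, y_pred, thresh)
    return 2 * p * r / (p + r) if (p + r) else 0.0
```

For example, with labels [1, 0, 1, 1] and predictions [0.9, 0.8, 0.2, 0.6], precision, recall, and F1 all come out to 2/3.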
# Note: categorical_accuracy and TopKCategoricalAccuracy are not meaningful for a
# single-unit sigmoid output; they report a constant 1.0 in the logs below.
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9), loss='binary_crossentropy', metrics=['accuracy', tf.keras.metrics.categorical_accuracy, tf.keras.metrics.Precision(), tf.keras.metrics.Recall(), tf.keras.metrics.TopKCategoricalAccuracy(k=2)])
import datetime
start=datetime.datetime.now()
history = model.fit(X_train, y_train, epochs=60, validation_data=(X_val, y_val))
print(datetime.datetime.now() - start)
Epoch 1/60
716/716 [==============================] - 25s 33ms/step - loss: 1.2956 - accuracy: 0.5222 - categorical_accuracy: 1.0000 - precision: 0.5176 - recall: 0.4733 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6864 - val_accuracy: 0.5971 - val_categorical_accuracy: 1.0000 - val_precision: 0.5357 - val_recall: 0.5786 - val_top_k_categorical_accuracy: 1.0000
Epoch 2/60
716/716 [==============================] - 23s 32ms/step - loss: 0.6763 - accuracy: 0.5712 - categorical_accuracy: 1.0000 - precision: 0.5422 - recall: 0.6026 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6681 - val_accuracy: 0.6467 - val_categorical_accuracy: 1.0000 - val_precision: 0.5570 - val_recall: 0.6160 - val_top_k_categorical_accuracy: 1.0000
Epoch 3/60
716/716 [==============================] - 23s 32ms/step - loss: 0.6525 - accuracy: 0.6104 - categorical_accuracy: 1.0000 - precision: 0.5629 - recall: 0.6153 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6550 - val_accuracy: 0.6157 - val_categorical_accuracy: 1.0000 - val_precision: 0.5791 - val_recall: 0.6101 - val_top_k_categorical_accuracy: 1.0000
Epoch 4/60
716/716 [==============================] - 23s 32ms/step - loss: 0.6202 - accuracy: 0.6502 - categorical_accuracy: 1.0000 - precision: 0.5848 - recall: 0.6071 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6196 - val_accuracy: 0.6600 - val_categorical_accuracy: 1.0000 - val_precision: 0.5998 - val_recall: 0.6099 - val_top_k_categorical_accuracy: 1.0000
Epoch 5/60
716/716 [==============================] - 23s 32ms/step - loss: 0.5913 - accuracy: 0.6819 - categorical_accuracy: 1.0000 - precision: 0.6053 - recall: 0.6106 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6046 - val_accuracy: 0.6757 - val_categorical_accuracy: 1.0000 - val_precision: 0.6193 - val_recall: 0.6143 - val_top_k_categorical_accuracy: 1.0000
Epoch 6/60
716/716 [==============================] - 23s 32ms/step - loss: 0.5687 - accuracy: 0.6982 - categorical_accuracy: 1.0000 - precision: 0.6239 - recall: 0.6155 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5823 - val_accuracy: 0.6767 - val_categorical_accuracy: 1.0000 - val_precision: 0.6356 - val_recall: 0.6200 - val_top_k_categorical_accuracy: 1.0000
Epoch 7/60
716/716 [==============================] - 23s 32ms/step - loss: 0.5511 - accuracy: 0.7138 - categorical_accuracy: 1.0000 - precision: 0.6397 - recall: 0.6212 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5697 - val_accuracy: 0.7005 - val_categorical_accuracy: 1.0000 - val_precision: 0.6491 - val_recall: 0.6264 - val_top_k_categorical_accuracy: 1.0000
Epoch 8/60
716/716 [==============================] - 23s 32ms/step - loss: 0.5366 - accuracy: 0.7280 - categorical_accuracy: 1.0000 - precision: 0.6527 - recall: 0.6279 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5643 - val_accuracy: 0.6800 - val_categorical_accuracy: 1.0000 - val_precision: 0.6615 - val_recall: 0.6327 - val_top_k_categorical_accuracy: 1.0000
Epoch 9/60
716/716 [==============================] - 23s 32ms/step - loss: 0.5171 - accuracy: 0.7440 - categorical_accuracy: 1.0000 - precision: 0.6646 - recall: 0.6339 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5283 - val_accuracy: 0.7205 - val_categorical_accuracy: 1.0000 - val_precision: 0.6726 - val_recall: 0.6402 - val_top_k_categorical_accuracy: 1.0000
Epoch 10/60
716/716 [==============================] - 23s 32ms/step - loss: 0.5006 - accuracy: 0.7551 - categorical_accuracy: 1.0000 - precision: 0.6756 - recall: 0.6420 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5370 - val_accuracy: 0.7048 - val_categorical_accuracy: 1.0000 - val_precision: 0.6826 - val_recall: 0.6479 - val_top_k_categorical_accuracy: 1.0000
Epoch 11/60
716/716 [==============================] - 23s 32ms/step - loss: 0.4716 - accuracy: 0.7772 - categorical_accuracy: 1.0000 - precision: 0.6854 - recall: 0.6495 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4825 - val_accuracy: 0.7643 - val_categorical_accuracy: 1.0000 - val_precision: 0.6921 - val_recall: 0.6564 - val_top_k_categorical_accuracy: 1.0000
Epoch 12/60
716/716 [==============================] - 23s 32ms/step - loss: 0.4617 - accuracy: 0.7787 - categorical_accuracy: 1.0000 - precision: 0.6946 - recall: 0.6587 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4864 - val_accuracy: 0.7510 - val_categorical_accuracy: 1.0000 - val_precision: 0.7002 - val_recall: 0.6644 - val_top_k_categorical_accuracy: 1.0000
Epoch 13/60
716/716 [==============================] - 23s 32ms/step - loss: 0.4529 - accuracy: 0.7852 - categorical_accuracy: 1.0000 - precision: 0.7023 - recall: 0.6663 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4467 - val_accuracy: 0.7862 - val_categorical_accuracy: 1.0000 - val_precision: 0.7075 - val_recall: 0.6719 - val_top_k_categorical_accuracy: 1.0000
Epoch 14/60
716/716 [==============================] - 23s 32ms/step - loss: 0.4428 - accuracy: 0.7921 - categorical_accuracy: 1.0000 - precision: 0.7095 - recall: 0.6737 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4497 - val_accuracy: 0.7871 - val_categorical_accuracy: 1.0000 - val_precision: 0.7147 - val_recall: 0.6790 - val_top_k_categorical_accuracy: 1.0000
Epoch 15/60
716/716 [==============================] - 23s 32ms/step - loss: 0.4265 - accuracy: 0.8000 - categorical_accuracy: 1.0000 - precision: 0.7165 - recall: 0.6808 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4339 - val_accuracy: 0.8029 - val_categorical_accuracy: 1.0000 - val_precision: 0.7212 - val_recall: 0.6861 - val_top_k_categorical_accuracy: 1.0000
Epoch 16/60
716/716 [==============================] - 23s 32ms/step - loss: 0.4114 - accuracy: 0.8107 - categorical_accuracy: 1.0000 - precision: 0.7230 - recall: 0.6879 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4360 - val_accuracy: 0.7948 - val_categorical_accuracy: 1.0000 - val_precision: 0.7275 - val_recall: 0.6929 - val_top_k_categorical_accuracy: 1.0000
Epoch 17/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3956 - accuracy: 0.8215 - categorical_accuracy: 1.0000 - precision: 0.7293 - recall: 0.6947 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3943 - val_accuracy: 0.8229 - val_categorical_accuracy: 1.0000 - val_precision: 0.7335 - val_recall: 0.6995 - val_top_k_categorical_accuracy: 1.0000
Epoch 18/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3903 - accuracy: 0.8211 - categorical_accuracy: 1.0000 - precision: 0.7350 - recall: 0.7013 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3948 - val_accuracy: 0.8290 - val_categorical_accuracy: 1.0000 - val_precision: 0.7390 - val_recall: 0.7060 - val_top_k_categorical_accuracy: 1.0000
Epoch 19/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3723 - accuracy: 0.8336 - categorical_accuracy: 1.0000 - precision: 0.7406 - recall: 0.7077 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4042 - val_accuracy: 0.8243 - val_categorical_accuracy: 1.0000 - val_precision: 0.7445 - val_recall: 0.7123 - val_top_k_categorical_accuracy: 1.0000
Epoch 20/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3651 - accuracy: 0.8339 - categorical_accuracy: 1.0000 - precision: 0.7459 - recall: 0.7137 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3882 - val_accuracy: 0.8305 - val_categorical_accuracy: 1.0000 - val_precision: 0.7496 - val_recall: 0.7180 - val_top_k_categorical_accuracy: 1.0000
Epoch 21/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3470 - accuracy: 0.8453 - categorical_accuracy: 1.0000 - precision: 0.7510 - recall: 0.7194 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3952 - val_accuracy: 0.8310 - val_categorical_accuracy: 1.0000 - val_precision: 0.7546 - val_recall: 0.7236 - val_top_k_categorical_accuracy: 1.0000
Epoch 22/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3345 - accuracy: 0.8564 - categorical_accuracy: 1.0000 - precision: 0.7560 - recall: 0.7251 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4525 - val_accuracy: 0.8048 - val_categorical_accuracy: 1.0000 - val_precision: 0.7593 - val_recall: 0.7288 - val_top_k_categorical_accuracy: 1.0000
Epoch 23/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3326 - accuracy: 0.8543 - categorical_accuracy: 1.0000 - precision: 0.7606 - recall: 0.7300 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3518 - val_accuracy: 0.8476 - val_categorical_accuracy: 1.0000 - val_precision: 0.7637 - val_recall: 0.7340 - val_top_k_categorical_accuracy: 1.0000
Epoch 24/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3137 - accuracy: 0.8641 - categorical_accuracy: 1.0000 - precision: 0.7650 - recall: 0.7355 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3965 - val_accuracy: 0.8290 - val_categorical_accuracy: 1.0000 - val_precision: 0.7680 - val_recall: 0.7390 - val_top_k_categorical_accuracy: 1.0000
Epoch 25/60
716/716 [==============================] - 23s 32ms/step - loss: 0.3083 - accuracy: 0.8678 - categorical_accuracy: 1.0000 - precision: 0.7692 - recall: 0.7402 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3999 - val_accuracy: 0.8314 - val_categorical_accuracy: 1.0000 - val_precision: 0.7722 - val_recall: 0.7438 - val_top_k_categorical_accuracy: 1.0000
Epoch 26/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2992 - accuracy: 0.8725 - categorical_accuracy: 1.0000 - precision: 0.7733 - recall: 0.7450 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3715 - val_accuracy: 0.8481 - val_categorical_accuracy: 1.0000 - val_precision: 0.7762 - val_recall: 0.7486 - val_top_k_categorical_accuracy: 1.0000
Epoch 27/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2862 - accuracy: 0.8774 - categorical_accuracy: 1.0000 - precision: 0.7773 - recall: 0.7498 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3445 - val_accuracy: 0.8557 - val_categorical_accuracy: 1.0000 - val_precision: 0.7801 - val_recall: 0.7530 - val_top_k_categorical_accuracy: 1.0000
Epoch 28/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2772 - accuracy: 0.8841 - categorical_accuracy: 1.0000 - precision: 0.7812 - recall: 0.7543 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3530 - val_accuracy: 0.8633 - val_categorical_accuracy: 1.0000 - val_precision: 0.7838 - val_recall: 0.7575 - val_top_k_categorical_accuracy: 1.0000
Epoch 29/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2606 - accuracy: 0.8906 - categorical_accuracy: 1.0000 - precision: 0.7849 - recall: 0.7587 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3287 - val_accuracy: 0.8548 - val_categorical_accuracy: 1.0000 - val_precision: 0.7875 - val_recall: 0.7618 - val_top_k_categorical_accuracy: 1.0000
Epoch 30/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2544 - accuracy: 0.8960 - categorical_accuracy: 1.0000 - precision: 0.7884 - recall: 0.7630 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3349 - val_accuracy: 0.8590 - val_categorical_accuracy: 1.0000 - val_precision: 0.7910 - val_recall: 0.7661 - val_top_k_categorical_accuracy: 1.0000
Epoch 31/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2512 - accuracy: 0.8955 - categorical_accuracy: 1.0000 - precision: 0.7920 - recall: 0.7671 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3211 - val_accuracy: 0.8705 - val_categorical_accuracy: 1.0000 - val_precision: 0.7945 - val_recall: 0.7701 - val_top_k_categorical_accuracy: 1.0000
Epoch 32/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2311 - accuracy: 0.9010 - categorical_accuracy: 1.0000 - precision: 0.7954 - recall: 0.7712 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3241 - val_accuracy: 0.8648 - val_categorical_accuracy: 1.0000 - val_precision: 0.7978 - val_recall: 0.7741 - val_top_k_categorical_accuracy: 1.0000
Epoch 33/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2217 - accuracy: 0.9068 - categorical_accuracy: 1.0000 - precision: 0.7987 - recall: 0.7752 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3160 - val_accuracy: 0.8752 - val_categorical_accuracy: 1.0000 - val_precision: 0.8010 - val_recall: 0.7780 - val_top_k_categorical_accuracy: 1.0000
Epoch 34/60
716/716 [==============================] - 23s 32ms/step - loss: 0.2089 - accuracy: 0.9120 - categorical_accuracy: 1.0000 - precision: 0.8019 - recall: 0.7790 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3237 - val_accuracy: 0.8714 - val_categorical_accuracy: 1.0000 - val_precision: 0.8042 - val_recall: 0.7818 - val_top_k_categorical_accuracy: 1.0000
Epoch 35/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1966 - accuracy: 0.9171 - categorical_accuracy: 1.0000 - precision: 0.8051 - recall: 0.7827 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3078 - val_accuracy: 0.8705 - val_categorical_accuracy: 1.0000 - val_precision: 0.8073 - val_recall: 0.7855 - val_top_k_categorical_accuracy: 1.0000
Epoch 36/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1998 - accuracy: 0.9200 - categorical_accuracy: 1.0000 - precision: 0.8081 - recall: 0.7865 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3379 - val_accuracy: 0.8619 - val_categorical_accuracy: 1.0000 - val_precision: 0.8103 - val_recall: 0.7892 - val_top_k_categorical_accuracy: 1.0000
Epoch 37/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1812 - accuracy: 0.9273 - categorical_accuracy: 1.0000 - precision: 0.8112 - recall: 0.7901 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3230 - val_accuracy: 0.8743 - val_categorical_accuracy: 1.0000 - val_precision: 0.8134 - val_recall: 0.7926 - val_top_k_categorical_accuracy: 1.0000
Epoch 38/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1767 - accuracy: 0.9285 - categorical_accuracy: 1.0000 - precision: 0.8142 - recall: 0.7935 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3517 - val_accuracy: 0.8719 - val_categorical_accuracy: 1.0000 - val_precision: 0.8164 - val_recall: 0.7960 - val_top_k_categorical_accuracy: 1.0000
Epoch 39/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1622 - accuracy: 0.9365 - categorical_accuracy: 1.0000 - precision: 0.8172 - recall: 0.7969 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3253 - val_accuracy: 0.8810 - val_categorical_accuracy: 1.0000 - val_precision: 0.8194 - val_recall: 0.7994 - val_top_k_categorical_accuracy: 1.0000
Epoch 40/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1654 - accuracy: 0.9305 - categorical_accuracy: 1.0000 - precision: 0.8201 - recall: 0.8003 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3157 - val_accuracy: 0.8810 - val_categorical_accuracy: 1.0000 - val_precision: 0.8221 - val_recall: 0.8026 - val_top_k_categorical_accuracy: 1.0000
Epoch 41/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1447 - accuracy: 0.9408 - categorical_accuracy: 1.0000 - precision: 0.8229 - recall: 0.8035 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3995 - val_accuracy: 0.8662 - val_categorical_accuracy: 1.0000 - val_precision: 0.8249 - val_recall: 0.8058 - val_top_k_categorical_accuracy: 1.0000
Epoch 42/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1418 - accuracy: 0.9419 - categorical_accuracy: 1.0000 - precision: 0.8256 - recall: 0.8066 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3617 - val_accuracy: 0.8695 - val_categorical_accuracy: 1.0000 - val_precision: 0.8277 - val_recall: 0.8088 - val_top_k_categorical_accuracy: 1.0000
Epoch 43/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1374 - accuracy: 0.9475 - categorical_accuracy: 1.0000 - precision: 0.8284 - recall: 0.8096 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3455 - val_accuracy: 0.8819 - val_categorical_accuracy: 1.0000 - val_precision: 0.8304 - val_recall: 0.8118 - val_top_k_categorical_accuracy: 1.0000
Epoch 44/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1298 - accuracy: 0.9497 - categorical_accuracy: 1.0000 - precision: 0.8310 - recall: 0.8126 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3317 - val_accuracy: 0.8867 - val_categorical_accuracy: 1.0000 - val_precision: 0.8329 - val_recall: 0.8148 - val_top_k_categorical_accuracy: 1.0000
Epoch 45/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1197 - accuracy: 0.9501 - categorical_accuracy: 1.0000 - precision: 0.8336 - recall: 0.8156 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3351 - val_accuracy: 0.8852 - val_categorical_accuracy: 1.0000 - val_precision: 0.8355 - val_recall: 0.8177 - val_top_k_categorical_accuracy: 1.0000
Epoch 46/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1176 - accuracy: 0.9539 - categorical_accuracy: 1.0000 - precision: 0.8361 - recall: 0.8184 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3611 - val_accuracy: 0.8819 - val_categorical_accuracy: 1.0000 - val_precision: 0.8379 - val_recall: 0.8205 - val_top_k_categorical_accuracy: 1.0000
Epoch 47/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1151 - accuracy: 0.9548 - categorical_accuracy: 1.0000 - precision: 0.8386 - recall: 0.8212 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3520 - val_accuracy: 0.8781 - val_categorical_accuracy: 1.0000 - val_precision: 0.8403 - val_recall: 0.8232 - val_top_k_categorical_accuracy: 1.0000
Epoch 48/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1060 - accuracy: 0.9582 - categorical_accuracy: 1.0000 - precision: 0.8409 - recall: 0.8239 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3723 - val_accuracy: 0.8838 - val_categorical_accuracy: 1.0000 - val_precision: 0.8427 - val_recall: 0.8259 - val_top_k_categorical_accuracy: 1.0000
Epoch 49/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1005 - accuracy: 0.9610 - categorical_accuracy: 1.0000 - precision: 0.8433 - recall: 0.8266 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3584 - val_accuracy: 0.8795 - val_categorical_accuracy: 1.0000 - val_precision: 0.8450 - val_recall: 0.8285 - val_top_k_categorical_accuracy: 1.0000
Epoch 50/60
716/716 [==============================] - 23s 32ms/step - loss: 0.1002 - accuracy: 0.9630 - categorical_accuracy: 1.0000 - precision: 0.8456 - recall: 0.8292 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4041 - val_accuracy: 0.8776 - val_categorical_accuracy: 1.0000 - val_precision: 0.8473 - val_recall: 0.8310 - val_top_k_categorical_accuracy: 1.0000
Epoch 51/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0949 - accuracy: 0.9633 - categorical_accuracy: 1.0000 - precision: 0.8479 - recall: 0.8317 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3709 - val_accuracy: 0.8838 - val_categorical_accuracy: 1.0000 - val_precision: 0.8494 - val_recall: 0.8335 - val_top_k_categorical_accuracy: 1.0000
Epoch 52/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0877 - accuracy: 0.9678 - categorical_accuracy: 1.0000 - precision: 0.8500 - recall: 0.8341 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3825 - val_accuracy: 0.8871 - val_categorical_accuracy: 1.0000 - val_precision: 0.8516 - val_recall: 0.8359 - val_top_k_categorical_accuracy: 1.0000
Epoch 53/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0853 - accuracy: 0.9675 - categorical_accuracy: 1.0000 - precision: 0.8521 - recall: 0.8365 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4033 - val_accuracy: 0.8833 - val_categorical_accuracy: 1.0000 - val_precision: 0.8537 - val_recall: 0.8382 - val_top_k_categorical_accuracy: 1.0000
Epoch 54/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0766 - accuracy: 0.9721 - categorical_accuracy: 1.0000 - precision: 0.8542 - recall: 0.8388 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4285 - val_accuracy: 0.8833 - val_categorical_accuracy: 1.0000 - val_precision: 0.8558 - val_recall: 0.8405 - val_top_k_categorical_accuracy: 1.0000
Epoch 55/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0733 - accuracy: 0.9736 - categorical_accuracy: 1.0000 - precision: 0.8563 - recall: 0.8411 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4078 - val_accuracy: 0.8771 - val_categorical_accuracy: 1.0000 - val_precision: 0.8578 - val_recall: 0.8427 - val_top_k_categorical_accuracy: 1.0000
Epoch 56/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0760 - accuracy: 0.9714 - categorical_accuracy: 1.0000 - precision: 0.8583 - recall: 0.8433 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3888 - val_accuracy: 0.8829 - val_categorical_accuracy: 1.0000 - val_precision: 0.8597 - val_recall: 0.8449 - val_top_k_categorical_accuracy: 1.0000
Epoch 57/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0717 - accuracy: 0.9735 - categorical_accuracy: 1.0000 - precision: 0.8602 - recall: 0.8454 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4045 - val_accuracy: 0.8943 - val_categorical_accuracy: 1.0000 - val_precision: 0.8615 - val_recall: 0.8470 - val_top_k_categorical_accuracy: 1.0000
Epoch 58/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0724 - accuracy: 0.9717 - categorical_accuracy: 1.0000 - precision: 0.8620 - recall: 0.8475 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5566 - val_accuracy: 0.8671 - val_categorical_accuracy: 1.0000 - val_precision: 0.8634 - val_recall: 0.8490 - val_top_k_categorical_accuracy: 1.0000
Epoch 59/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0670 - accuracy: 0.9747 - categorical_accuracy: 1.0000 - precision: 0.8639 - recall: 0.8494 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3799 - val_accuracy: 0.8867 - val_categorical_accuracy: 1.0000 - val_precision: 0.8652 - val_recall: 0.8509 - val_top_k_categorical_accuracy: 1.0000
Epoch 60/60
716/716 [==============================] - 23s 32ms/step - loss: 0.0660 - accuracy: 0.9753 - categorical_accuracy: 1.0000 - precision: 0.8657 - recall: 0.8514 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4504 - val_accuracy: 0.8914 - val_categorical_accuracy: 1.0000 - val_precision: 0.8670 - val_recall: 0.8529 - val_top_k_categorical_accuracy: 1.0000
datetime.timedelta(seconds=1378, microseconds=414114)
pd.DataFrame(history.history)
| | loss | accuracy | categorical_accuracy | precision | recall | top_k_categorical_accuracy | val_loss | val_accuracy | val_categorical_accuracy | val_precision | val_recall | val_top_k_categorical_accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.791424 | 0.536245 | 1.0 | 0.520641 | 0.512125 | 1.0 | 0.686443 | 0.597143 | 1.0 | 0.535727 | 0.578594 | 1.0 |
| 1 | 0.671224 | 0.582620 | 1.0 | 0.546250 | 0.609359 | 1.0 | 0.668093 | 0.646667 | 1.0 | 0.557031 | 0.615976 | 1.0 |
| 2 | 0.644322 | 0.622445 | 1.0 | 0.567729 | 0.615537 | 1.0 | 0.654971 | 0.615714 | 1.0 | 0.579095 | 0.610115 | 1.0 |
| 3 | 0.612784 | 0.656900 | 1.0 | 0.589480 | 0.608225 | 1.0 | 0.619609 | 0.660000 | 1.0 | 0.599824 | 0.609920 | 1.0 |
| 4 | 0.587655 | 0.686550 | 1.0 | 0.609740 | 0.612273 | 1.0 | 0.604595 | 0.675714 | 1.0 | 0.619269 | 0.614253 | 1.0 |
| 5 | 0.567028 | 0.703799 | 1.0 | 0.627630 | 0.617413 | 1.0 | 0.582260 | 0.676667 | 1.0 | 0.635577 | 0.619997 | 1.0 |
| 6 | 0.550161 | 0.715502 | 1.0 | 0.642779 | 0.623376 | 1.0 | 0.569658 | 0.700476 | 1.0 | 0.649103 | 0.626421 | 1.0 |
| 7 | 0.530699 | 0.731485 | 1.0 | 0.655546 | 0.629941 | 1.0 | 0.564337 | 0.680000 | 1.0 | 0.661494 | 0.632653 | 1.0 |
| 8 | 0.513239 | 0.746550 | 1.0 | 0.667151 | 0.636178 | 1.0 | 0.528254 | 0.720476 | 1.0 | 0.672633 | 0.640157 | 1.0 |
| 9 | 0.496790 | 0.757948 | 1.0 | 0.677795 | 0.644264 | 1.0 | 0.537033 | 0.704762 | 1.0 | 0.682640 | 0.647853 | 1.0 |
| 10 | 0.478654 | 0.773362 | 1.0 | 0.687563 | 0.651918 | 1.0 | 0.482532 | 0.764286 | 1.0 | 0.692059 | 0.656392 | 1.0 |
| 11 | 0.465016 | 0.776681 | 1.0 | 0.696347 | 0.660874 | 1.0 | 0.486357 | 0.750952 | 1.0 | 0.700160 | 0.664448 | 1.0 |
| 12 | 0.454248 | 0.783624 | 1.0 | 0.703997 | 0.668253 | 1.0 | 0.446652 | 0.786190 | 1.0 | 0.707500 | 0.671931 | 1.0 |
| 13 | 0.440119 | 0.794017 | 1.0 | 0.711125 | 0.675503 | 1.0 | 0.449695 | 0.787143 | 1.0 | 0.714682 | 0.679013 | 1.0 |
| 14 | 0.425937 | 0.801135 | 1.0 | 0.717960 | 0.682536 | 1.0 | 0.433890 | 0.802857 | 1.0 | 0.721187 | 0.686134 | 1.0 |
| 15 | 0.411553 | 0.810830 | 1.0 | 0.724414 | 0.689603 | 1.0 | 0.435987 | 0.794762 | 1.0 | 0.727468 | 0.692927 | 1.0 |
| 16 | 0.397812 | 0.819782 | 1.0 | 0.730640 | 0.696334 | 1.0 | 0.394286 | 0.822857 | 1.0 | 0.733516 | 0.699499 | 1.0 |
| 17 | 0.385510 | 0.824542 | 1.0 | 0.736268 | 0.702850 | 1.0 | 0.394825 | 0.829048 | 1.0 | 0.739035 | 0.706048 | 1.0 |
| 18 | 0.372417 | 0.834978 | 1.0 | 0.741827 | 0.709310 | 1.0 | 0.404216 | 0.824286 | 1.0 | 0.744541 | 0.712287 | 1.0 |
| 19 | 0.365291 | 0.837205 | 1.0 | 0.747065 | 0.715081 | 1.0 | 0.388160 | 0.830476 | 1.0 | 0.749633 | 0.717951 | 1.0 |
| 20 | 0.348150 | 0.846332 | 1.0 | 0.752123 | 0.720832 | 1.0 | 0.395200 | 0.830952 | 1.0 | 0.754605 | 0.723580 | 1.0 |
| 21 | 0.337317 | 0.851659 | 1.0 | 0.757070 | 0.726427 | 1.0 | 0.452537 | 0.804762 | 1.0 | 0.759314 | 0.728843 | 1.0 |
| 22 | 0.327590 | 0.856725 | 1.0 | 0.761594 | 0.731354 | 1.0 | 0.351779 | 0.847619 | 1.0 | 0.763746 | 0.734034 | 1.0 |
| 23 | 0.318030 | 0.860961 | 1.0 | 0.765948 | 0.736707 | 1.0 | 0.396491 | 0.829048 | 1.0 | 0.767989 | 0.739039 | 1.0 |
| 24 | 0.306793 | 0.868472 | 1.0 | 0.770159 | 0.741414 | 1.0 | 0.399886 | 0.831429 | 1.0 | 0.772216 | 0.743797 | 1.0 |
| 25 | 0.299240 | 0.873668 | 1.0 | 0.774206 | 0.746129 | 1.0 | 0.371502 | 0.848095 | 1.0 | 0.776234 | 0.748551 | 1.0 |
| 26 | 0.290005 | 0.876114 | 1.0 | 0.778210 | 0.750856 | 1.0 | 0.344513 | 0.855714 | 1.0 | 0.780090 | 0.753034 | 1.0 |
| 27 | 0.277333 | 0.881703 | 1.0 | 0.782014 | 0.755353 | 1.0 | 0.352953 | 0.863333 | 1.0 | 0.783817 | 0.757509 | 1.0 |
| 28 | 0.267490 | 0.886943 | 1.0 | 0.785717 | 0.759701 | 1.0 | 0.328672 | 0.854762 | 1.0 | 0.787461 | 0.761813 | 1.0 |
| 29 | 0.253646 | 0.893406 | 1.0 | 0.789261 | 0.764014 | 1.0 | 0.334874 | 0.859048 | 1.0 | 0.791028 | 0.766072 | 1.0 |
| 30 | 0.248109 | 0.896550 | 1.0 | 0.792770 | 0.768102 | 1.0 | 0.321081 | 0.870476 | 1.0 | 0.794496 | 0.770133 | 1.0 |
| 31 | 0.234823 | 0.899476 | 1.0 | 0.796205 | 0.772184 | 1.0 | 0.324142 | 0.864762 | 1.0 | 0.797806 | 0.774093 | 1.0 |
| 32 | 0.226215 | 0.904061 | 1.0 | 0.799457 | 0.776082 | 1.0 | 0.315997 | 0.875238 | 1.0 | 0.801001 | 0.778000 | 1.0 |
| 33 | 0.212729 | 0.909127 | 1.0 | 0.802658 | 0.779901 | 1.0 | 0.323690 | 0.871429 | 1.0 | 0.804197 | 0.781759 | 1.0 |
| 34 | 0.203679 | 0.913493 | 1.0 | 0.805790 | 0.783618 | 1.0 | 0.307789 | 0.870476 | 1.0 | 0.807261 | 0.785485 | 1.0 |
| 35 | 0.199076 | 0.919607 | 1.0 | 0.808787 | 0.787405 | 1.0 | 0.337942 | 0.861905 | 1.0 | 0.810302 | 0.789153 | 1.0 |
| 36 | 0.186508 | 0.924192 | 1.0 | 0.811887 | 0.790908 | 1.0 | 0.323035 | 0.874286 | 1.0 | 0.813375 | 0.792607 | 1.0 |
| 37 | 0.176671 | 0.928384 | 1.0 | 0.814910 | 0.794347 | 1.0 | 0.351746 | 0.871905 | 1.0 | 0.816399 | 0.796002 | 1.0 |
| 38 | 0.167339 | 0.933231 | 1.0 | 0.817933 | 0.797736 | 1.0 | 0.325273 | 0.880952 | 1.0 | 0.819359 | 0.799382 | 1.0 |
| 39 | 0.165478 | 0.932009 | 1.0 | 0.820761 | 0.801024 | 1.0 | 0.315673 | 0.880952 | 1.0 | 0.822124 | 0.802622 | 1.0 |
| 40 | 0.150771 | 0.938035 | 1.0 | 0.823513 | 0.804287 | 1.0 | 0.399474 | 0.866190 | 1.0 | 0.824880 | 0.805804 | 1.0 |
| 41 | 0.142358 | 0.942838 | 1.0 | 0.826281 | 0.807333 | 1.0 | 0.361716 | 0.869524 | 1.0 | 0.827653 | 0.808847 | 1.0 |
| 42 | 0.137985 | 0.944585 | 1.0 | 0.829044 | 0.810303 | 1.0 | 0.345509 | 0.881905 | 1.0 | 0.830352 | 0.811791 | 1.0 |
| 43 | 0.132258 | 0.948341 | 1.0 | 0.831663 | 0.813330 | 1.0 | 0.331664 | 0.886667 | 1.0 | 0.832946 | 0.814808 | 1.0 |
| 44 | 0.123675 | 0.948865 | 1.0 | 0.834220 | 0.816262 | 1.0 | 0.335091 | 0.885238 | 1.0 | 0.835459 | 0.817674 | 1.0 |
| 45 | 0.121363 | 0.952664 | 1.0 | 0.836729 | 0.819087 | 1.0 | 0.361094 | 0.881905 | 1.0 | 0.837947 | 0.820474 | 1.0 |
| 46 | 0.117119 | 0.953843 | 1.0 | 0.839157 | 0.821871 | 1.0 | 0.351977 | 0.878095 | 1.0 | 0.840313 | 0.823198 | 1.0 |
| 47 | 0.107185 | 0.959083 | 1.0 | 0.841518 | 0.824548 | 1.0 | 0.372252 | 0.883810 | 1.0 | 0.842714 | 0.825877 | 1.0 |
| 48 | 0.103425 | 0.960480 | 1.0 | 0.843883 | 0.827187 | 1.0 | 0.358397 | 0.879524 | 1.0 | 0.845028 | 0.828499 | 1.0 |
| 49 | 0.099801 | 0.962445 | 1.0 | 0.846154 | 0.829805 | 1.0 | 0.404124 | 0.877619 | 1.0 | 0.847274 | 0.831047 | 1.0 |
| 50 | 0.096510 | 0.962358 | 1.0 | 0.848388 | 0.832281 | 1.0 | 0.370914 | 0.883810 | 1.0 | 0.849441 | 0.833482 | 1.0 |
| 51 | 0.089653 | 0.965633 | 1.0 | 0.850530 | 0.834724 | 1.0 | 0.382509 | 0.887143 | 1.0 | 0.851589 | 0.835892 | 1.0 |
| 52 | 0.089042 | 0.966681 | 1.0 | 0.852644 | 0.837072 | 1.0 | 0.403285 | 0.883333 | 1.0 | 0.853683 | 0.838220 | 1.0 |
| 53 | 0.077685 | 0.970873 | 1.0 | 0.854741 | 0.839368 | 1.0 | 0.428507 | 0.883333 | 1.0 | 0.855772 | 0.840514 | 1.0 |
| 54 | 0.080141 | 0.969869 | 1.0 | 0.856797 | 0.841634 | 1.0 | 0.407768 | 0.877143 | 1.0 | 0.857764 | 0.842708 | 1.0 |
| 55 | 0.077240 | 0.970873 | 1.0 | 0.858752 | 0.843803 | 1.0 | 0.388765 | 0.882857 | 1.0 | 0.859675 | 0.844872 | 1.0 |
| 56 | 0.074919 | 0.971179 | 1.0 | 0.860605 | 0.845962 | 1.0 | 0.404488 | 0.894286 | 1.0 | 0.861536 | 0.846978 | 1.0 |
| 57 | 0.071821 | 0.972707 | 1.0 | 0.862465 | 0.848015 | 1.0 | 0.556622 | 0.867143 | 1.0 | 0.863391 | 0.848955 | 1.0 |
| 58 | 0.066793 | 0.975371 | 1.0 | 0.864318 | 0.849905 | 1.0 | 0.379873 | 0.886667 | 1.0 | 0.865213 | 0.850926 | 1.0 |
| 59 | 0.063731 | 0.976070 | 1.0 | 0.866081 | 0.851918 | 1.0 | 0.450390 | 0.891429 | 1.0 | 0.866967 | 0.852883 | 1.0 |
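The tail of this run shows training accuracy (about 0.976 at epoch 59) pulling away from validation accuracy (about 0.891), a sign of overfitting. A small sketch for tracking that gap from `history.history` (stand-in values below, not the actual run):

```python
import pandas as pd

# Stand-in for pd.DataFrame(history.history) with just the two accuracy columns
hist = pd.DataFrame({'accuracy': [0.932, 0.962, 0.976],
                     'val_accuracy': [0.881, 0.884, 0.891]})

# Per-epoch train/validation gap; a widening gap signals overfitting
gap = (hist['accuracy'] - hist['val_accuracy']).round(3)
print(gap.max())  # -> 0.085
```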
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(20,20))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
del X_train
del X_val
jpg_list_test=os.listdir('/content/test')
x_test = [] # images as arrays
end1=0
for image in jpg_list_test:
    x_test.append(cv2.resize(cv2.imread('/content/test/{}'.format(image)), (200,200), interpolation=cv2.INTER_CUBIC))
    clear_output(wait=True)
    end1=end1+1
    print('Images Processed:', end1)
    print('Percent Complete:', round((100*(end1/len(jpg_list_test))),2))
Images Processed: 12500
Percent Complete: 100
x_test=asarray(x_test, dtype='float32')
x_test=x_test/255
y_pred=model.predict(x_test, verbose=0)
y_pred
array([[1.5884418e-04],
[9.5029751e-17],
[8.9731878e-01],
...,
[8.8018015e-13],
[1.4180523e-24],
[1.2201853e-05]], dtype=float32)
y_pred.min()
1.2193826e-31
y_pred.max()
1.0
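`y_pred` holds raw sigmoid probabilities in [0, 1]. If the site needs hard cat/dog labels rather than scores, a minimal sketch (using the same 0.5 cutoff that Keras's `accuracy` metric applies implicitly; toy values stand in for `y_pred`):

```python
import numpy as np

# Toy sigmoid outputs standing in for model.predict(x_test)
y_pred = np.array([[1.59e-04], [9.50e-17], [8.97e-01]], dtype='float32')

# 0-1, Cat-Dog: anything above the 0.5 cutoff is labelled dog (1)
labels = (y_pred > 0.5).astype(int).ravel()
print(labels)  # -> [0 0 1]
```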
import matplotlib.pyplot as plt
image = Image.open('/content/test/{}'.format(jpg_list_test[0]))
plt.imshow(image)
<matplotlib.image.AxesImage at 0x7fee44042f50>
images=y_pred[0:36].astype(float)
images=np.around(images, decimals=4)
labeldgct=jpg_list_test[0:36]
im = Image.open("/content/test/9981.jpg")
plt.matshow(im)
<matplotlib.image.AxesImage at 0x7fedc41a5a10>
print(images[1])
array([0.])
fig, axs = plt.subplots(6, 6, figsize = (40, 40))
plt.gray()
for i, ax in enumerate(axs.flat):
    i=500+i
    ax.set_title('0-1, Cat-Dog: {}'.format(np.around(y_pred[i].astype(float), decimals=4)))
    ax.matshow(Image.open('/content/test/{}'.format(jpg_list_test[i])))
    ax.axis('off')
submission_df=pd.read_csv('/content/sample_submission.csv')
submission_df['label']=y_pred
submission_df.to_csv('/content/submission.csv', index=False)
submission_df
| id | label | |
|---|---|---|
| 0 | 1 | 1.588442e-04 |
| 1 | 2 | 9.502975e-17 |
| 2 | 3 | 8.973188e-01 |
| 3 | 4 | 3.751121e-01 |
| 4 | 5 | 9.999905e-01 |
| ... | ... | ... |
| 12495 | 12496 | 9.329692e-06 |
| 12496 | 12497 | 1.103463e-05 |
| 12497 | 12498 | 8.801802e-13 |
| 12498 | 12499 | 1.418052e-24 |
| 12499 | 12500 | 1.220185e-05 |
12500 rows × 2 columns
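Log loss punishes confident mistakes without bound: a prediction like 9.5e-17 costs almost nothing when correct but an enormous penalty when wrong. A common hedge worth trying before writing `submission.csv` is clipping the probabilities away from 0 and 1 (the clip bounds here are an assumption, not tuned):

```python
import numpy as np

y_pred = np.array([9.5e-17, 0.897, 1.0])   # raw sigmoid outputs
y_clipped = np.clip(y_pred, 0.005, 0.995)  # cap confidence in both directions
print(y_clipped)                           # -> [0.005 0.897 0.995]
```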
# from keras.utils.vis_utils import plot_model
# plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
!kaggle competitions submit -c dogs-vs-cats-redux-kernels-edition -f /content/submission.csv -m 'CNN3'
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 200k/200k [00:02<00:00, 79.6kB/s]
Successfully submitted to Dogs vs. Cats Redux: Kernels Edition
!kaggle competitions submissions -c dogs-vs-cats-redux-kernels-edition
fileName date description status publicScore privateScore
--------------------- ------------------- ----------- -------- ----------- ------------
submission.csv 2021-06-09 23:33:45 CNN3 complete 5.45001 5.45001
submission.csv 2021-06-09 22:25:38 CNN3 complete 1.62022 1.62022
submission.csv 2021-06-09 22:06:16 CNN3 complete 7.65715 7.65715
submission.csv 2021-06-09 20:42:45 CNN3 complete 3.71867 3.71867
submission.csv 2021-06-08 02:44:00 CNN2 complete 6.42962 6.42962
sample_submission.csv 2021-06-08 02:11:27 CNN1 complete 0.69314 0.69314
### MY KAGGLE USERNAME IS michaelrocchio2
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='/content/model_plot.png', show_shapes=True, show_layer_names=True)
from google.colab import files
files.upload()
Saving kaggle.json to kaggle.json
{'kaggle.json': b'{"username":"michaelrocchio","key":"fcb9d1568595e76eab4ba8e2b41f9ff4"}'}
!mv /content/kaggle.json /root/.kaggle/kaggle.json
ls /root/.kaggle/
kaggle.json
import kaggle
!kaggle competitions download -c dogs-vs-cats-redux-kernels-edition
train.zip: Skipping, found more recently modified local copy (use --force to force download)
test.zip: Skipping, found more recently modified local copy (use --force to force download)
sample_submission.csv: Skipping, found more recently modified local copy (use --force to force download)
import zipfile
from zipfile import *
zip_test = ZipFile('/content/test.zip')
zip_test.extractall()
zip_train = ZipFile('/content/train.zip')
zip_train.extractall()
import os
import pandas as pd
import numpy as np
jpg_list=os.listdir('/content/train')
labels_list=pd.DataFrame({
    'label': jpg_list,
    'filename': jpg_list
})
labels_list['label']=labels_list['label'].str[0:3]
labels_list['y']=np.where(labels_list['label']=='dog',1,0)
labels_list
| label | filename | y | |
|---|---|---|---|
| 0 | cat | cat.9673.jpg | 0 |
| 1 | cat | cat.3257.jpg | 0 |
| 2 | dog | dog.11897.jpg | 1 |
| 3 | dog | dog.8137.jpg | 1 |
| 4 | dog | dog.2316.jpg | 1 |
| ... | ... | ... | ... |
| 24995 | dog | dog.12438.jpg | 1 |
| 24996 | dog | dog.10947.jpg | 1 |
| 24997 | dog | dog.6583.jpg | 1 |
| 24998 | cat | cat.4690.jpg | 0 |
| 24999 | cat | cat.1629.jpg | 0 |
25000 rows × 3 columns
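Before training, it is worth confirming the two classes are balanced, since a skewed split would bias both accuracy and the sigmoid threshold. A quick sketch using the `label` column built above (a toy frame stands in for `labels_list`):

```python
import pandas as pd

# Stand-in for the 25,000-row labels_list frame
labels_list = pd.DataFrame({'label': ['cat', 'dog', 'dog', 'cat']})

counts = labels_list['label'].value_counts()
print(counts['cat'], counts['dog'])  # -> 2 2
```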
import sys
from PIL import Image
import time
import glob
from IPython.display import clear_output
import cv2
from numpy import asarray
jpg_list_train=jpg_list
x = [] # images as arrays
y = [] # labels
end1=0
for image in jpg_list_train:
    x.append(cv2.resize(cv2.imread('/content/train/{}'.format(image)), (200,200), interpolation=cv2.INTER_CUBIC))
    clear_output(wait=True)
    end1=end1+1
    print('Images Processed:', end1)
    print('Percent Complete:', round((100*(end1/len(jpg_list_train))),2))
    if 'dog' in image:
        y.append(1)
    elif 'cat' in image:
        y.append(0)
Images Processed: 25000
Percent Complete: 100.0
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img
from keras import layers, models, optimizers
from keras import backend as K
from numpy import asarray
x=asarray(x, dtype='float32')
y=asarray(y, dtype='float32')
# x=np.transpose(x, (2, 1, 3, 0))
x.shape
(25000, 200, 200, 3)
x=x/255
y.shape
(25000,)
X_train = x[2100:,:,:,:]
X_val = x[:2100,:,:,:]
del x
y_train = y[2100:]
y_val = y[:2100]
X_train.shape
y_train.shape
del y
X_train.shape
(22900, 200, 200, 3)
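The slice-based split above trusts `os.listdir` ordering, which is arbitrary and not guaranteed to be shuffled or class-balanced. `train_test_split` (already imported above) gives a shuffled, stratified split instead; a sketch on toy arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(20).reshape(10, 2)  # stand-in for the (N, 200, 200, 3) image array
y = np.array([0, 1] * 5)          # stand-in cat/dog labels

# stratify=y keeps the cat/dog ratio identical in train and validation
X_train, X_val, y_train, y_val = train_test_split(
    x, y, test_size=0.2, stratify=y, random_state=42)
print(X_train.shape, X_val.shape)  # -> (8, 2) (2, 2)
```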
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
import tensorflow
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation, Conv2D
from keras.optimizers import RMSprop, SGD
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
keras.backend.clear_session()
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(200, 200, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 200, 200, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 100, 100, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 100, 100, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 100, 100, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 50, 50, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 50, 50, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 50, 50, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 25, 25, 128) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 25, 25, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 80000) 0
_________________________________________________________________
dense (Dense) (None, 128) 10240128
_________________________________________________________________
dropout_3 (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 129
=================================================================
Total params: 10,333,505
Trainable params: 10,333,505
Non-trainable params: 0
_________________________________________________________________
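The parameter counts in the summary can be verified by hand: each Conv2D layer has kernel_h × kernel_w × in_channels × filters weights plus one bias per filter, and the dense layer sees the flattened 25 × 25 × 128 = 80,000 features. A sketch of the arithmetic:

```python
def conv_params(kh, kw, cin, cout):
    # weights per filter (kh * kw * cin) times filter count, plus one bias each
    return kh * kw * cin * cout + cout

print(conv_params(3, 3, 3, 32))    # -> 896      (conv2d)
print(conv_params(3, 3, 32, 64))   # -> 18496    (conv2d_1)
print(conv_params(3, 3, 64, 128))  # -> 73856    (conv2d_2)
print(25 * 25 * 128 * 128 + 128)   # -> 10240128 (dense)
```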
import tensorflow as tf
class MulticlassTruePositives(tf.keras.metrics.Metric):
    def __init__(self, name='multiclass_true_positives', **kwargs):
        super(MulticlassTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
        values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')
        values = tf.cast(values, 'float32')
        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, 'float32')
            values = tf.multiply(values, sample_weight)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):
        # The state of the metric will be reset at the start of each epoch.
        self.true_positives.assign(0.)
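One caution about reusing this metric with the model above: `argmax` over a single sigmoid column (shape `(batch, 1)`) is always 0, so on a binary model with one output unit the metric would simply count class-0 samples. A NumPy sketch of the failure mode:

```python
import numpy as np

# A single sigmoid column, as this model produces
y_pred = np.array([[0.2], [0.9], [0.1]])

# argmax along axis=1 of a one-column array can only ever be 0
pred_labels = y_pred.argmax(axis=1)
print(pred_labels)  # -> [0 0 0]
```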
from tensorflow import *
def recall(y_true, y_pred):
    # true positives over all actual positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    all_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (all_positives + K.epsilon())

def precision(y_true, y_pred):
    # true positives over all predicted positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())
def f1_score(y_true, y_pred):
    # harmonic mean of the precision and recall helpers defined above
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))
# Note: with a single sigmoid output unit, categorical_accuracy and top-k
# accuracy always evaluate to 1.0 (visible in the training log below); only
# accuracy, precision, and recall are informative for this binary model.
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9),
              loss='binary_crossentropy',
              metrics=['accuracy',
                       tensorflow.keras.metrics.categorical_accuracy,
                       tensorflow.keras.metrics.Precision(),
                       tensorflow.keras.metrics.Recall(),
                       tensorflow.keras.metrics.TopKCategoricalAccuracy(k=2)])
import datetime
start=datetime.datetime.now()
history = model.fit(X_train, y_train, epochs=35, validation_data=(X_val, y_val))
print(datetime.datetime.now() - start)
Epoch 1/35
716/716 [==============================] - 49s 30ms/step - loss: 0.8778 - accuracy: 0.5111 - categorical_accuracy: 1.0000 - precision: 0.5058 - recall: 0.5011 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6895 - val_accuracy: 0.5929 - val_categorical_accuracy: 1.0000 - val_precision: 0.5290 - val_recall: 0.4461 - val_top_k_categorical_accuracy: 1.0000
Epoch 2/35
716/716 [==============================] - 21s 29ms/step - loss: 0.6822 - accuracy: 0.5474 - categorical_accuracy: 1.0000 - precision: 0.5358 - recall: 0.4611 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6826 - val_accuracy: 0.5562 - val_categorical_accuracy: 1.0000 - val_precision: 0.5468 - val_recall: 0.4710 - val_top_k_categorical_accuracy: 1.0000
Epoch 3/35
716/716 [==============================] - 21s 29ms/step - loss: 0.6675 - accuracy: 0.5728 - categorical_accuracy: 1.0000 - precision: 0.5515 - recall: 0.4691 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6686 - val_accuracy: 0.6000 - val_categorical_accuracy: 1.0000 - val_precision: 0.5638 - val_recall: 0.4779 - val_top_k_categorical_accuracy: 1.0000
Epoch 4/35
716/716 [==============================] - 21s 29ms/step - loss: 0.6418 - accuracy: 0.6178 - categorical_accuracy: 1.0000 - precision: 0.5703 - recall: 0.4777 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6262 - val_accuracy: 0.6505 - val_categorical_accuracy: 1.0000 - val_precision: 0.5853 - val_recall: 0.4895 - val_top_k_categorical_accuracy: 1.0000
Epoch 5/35
716/716 [==============================] - 21s 29ms/step - loss: 0.6144 - accuracy: 0.6511 - categorical_accuracy: 1.0000 - precision: 0.5912 - recall: 0.4939 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6128 - val_accuracy: 0.6652 - val_categorical_accuracy: 1.0000 - val_precision: 0.6069 - val_recall: 0.5052 - val_top_k_categorical_accuracy: 1.0000
Epoch 6/35
716/716 [==============================] - 21s 29ms/step - loss: 0.5945 - accuracy: 0.6692 - categorical_accuracy: 1.0000 - precision: 0.6121 - recall: 0.5072 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5931 - val_accuracy: 0.6843 - val_categorical_accuracy: 1.0000 - val_precision: 0.6254 - val_recall: 0.5159 - val_top_k_categorical_accuracy: 1.0000
Epoch 7/35
716/716 [==============================] - 21s 29ms/step - loss: 0.5752 - accuracy: 0.6928 - categorical_accuracy: 1.0000 - precision: 0.6303 - recall: 0.5189 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5629 - val_accuracy: 0.6990 - val_categorical_accuracy: 1.0000 - val_precision: 0.6418 - val_recall: 0.5289 - val_top_k_categorical_accuracy: 1.0000
Epoch 8/35
716/716 [==============================] - 21s 29ms/step - loss: 0.5478 - accuracy: 0.7078 - categorical_accuracy: 1.0000 - precision: 0.6460 - recall: 0.5317 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5300 - val_accuracy: 0.7538 - val_categorical_accuracy: 1.0000 - val_precision: 0.6559 - val_recall: 0.5431 - val_top_k_categorical_accuracy: 1.0000
Epoch 9/35
716/716 [==============================] - 21s 29ms/step - loss: 0.5213 - accuracy: 0.7339 - categorical_accuracy: 1.0000 - precision: 0.6597 - recall: 0.5485 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5179 - val_accuracy: 0.7538 - val_categorical_accuracy: 1.0000 - val_precision: 0.6678 - val_recall: 0.5608 - val_top_k_categorical_accuracy: 1.0000
Epoch 10/35
716/716 [==============================] - 21s 29ms/step - loss: 0.5110 - accuracy: 0.7410 - categorical_accuracy: 1.0000 - precision: 0.6706 - recall: 0.5654 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4928 - val_accuracy: 0.7581 - val_categorical_accuracy: 1.0000 - val_precision: 0.6779 - val_recall: 0.5771 - val_top_k_categorical_accuracy: 1.0000
Epoch 11/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4838 - accuracy: 0.7648 - categorical_accuracy: 1.0000 - precision: 0.6808 - recall: 0.5811 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4823 - val_accuracy: 0.7638 - val_categorical_accuracy: 1.0000 - val_precision: 0.6876 - val_recall: 0.5925 - val_top_k_categorical_accuracy: 1.0000
Epoch 12/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4764 - accuracy: 0.7757 - categorical_accuracy: 1.0000 - precision: 0.6903 - recall: 0.5966 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4780 - val_accuracy: 0.7752 - val_categorical_accuracy: 1.0000 - val_precision: 0.6961 - val_recall: 0.6070 - val_top_k_categorical_accuracy: 1.0000
Epoch 13/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4572 - accuracy: 0.7805 - categorical_accuracy: 1.0000 - precision: 0.6983 - recall: 0.6106 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4562 - val_accuracy: 0.7862 - val_categorical_accuracy: 1.0000 - val_precision: 0.7036 - val_recall: 0.6201 - val_top_k_categorical_accuracy: 1.0000
Epoch 14/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4362 - accuracy: 0.7928 - categorical_accuracy: 1.0000 - precision: 0.7058 - recall: 0.6233 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4293 - val_accuracy: 0.8095 - val_categorical_accuracy: 1.0000 - val_precision: 0.7109 - val_recall: 0.6323 - val_top_k_categorical_accuracy: 1.0000
Epoch 15/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4213 - accuracy: 0.8059 - categorical_accuracy: 1.0000 - precision: 0.7129 - recall: 0.6354 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4202 - val_accuracy: 0.8105 - val_categorical_accuracy: 1.0000 - val_precision: 0.7179 - val_recall: 0.6438 - val_top_k_categorical_accuracy: 1.0000
Epoch 16/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4140 - accuracy: 0.8055 - categorical_accuracy: 1.0000 - precision: 0.7196 - recall: 0.6467 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4137 - val_accuracy: 0.8195 - val_categorical_accuracy: 1.0000 - val_precision: 0.7240 - val_recall: 0.6543 - val_top_k_categorical_accuracy: 1.0000
Epoch 17/35
716/716 [==============================] - 21s 29ms/step - loss: 0.4020 - accuracy: 0.8155 - categorical_accuracy: 1.0000 - precision: 0.7257 - recall: 0.6571 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4127 - val_accuracy: 0.8195 - val_categorical_accuracy: 1.0000 - val_precision: 0.7300 - val_recall: 0.6642 - val_top_k_categorical_accuracy: 1.0000
Epoch 18/35
716/716 [==============================] - 21s 29ms/step - loss: 0.3812 - accuracy: 0.8237 - categorical_accuracy: 1.0000 - precision: 0.7316 - recall: 0.6668 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4815 - val_accuracy: 0.7790 - val_categorical_accuracy: 1.0000 - val_precision: 0.7360 - val_recall: 0.6731 - val_top_k_categorical_accuracy: 1.0000
Epoch 19/35
716/716 [==============================] - 21s 29ms/step - loss: 0.3701 - accuracy: 0.8352 - categorical_accuracy: 1.0000 - precision: 0.7377 - recall: 0.6751 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3958 - val_accuracy: 0.8238 - val_categorical_accuracy: 1.0000 - val_precision: 0.7416 - val_recall: 0.6813 - val_top_k_categorical_accuracy: 1.0000
Epoch 20/35
716/716 [==============================] - 21s 29ms/step - loss: 0.3543 - accuracy: 0.8411 - categorical_accuracy: 1.0000 - precision: 0.7431 - recall: 0.6836 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4074 - val_accuracy: 0.8176 - val_categorical_accuracy: 1.0000 - val_precision: 0.7467 - val_recall: 0.6892 - val_top_k_categorical_accuracy: 1.0000
Epoch 21/35
716/716 [==============================] - 21s 29ms/step - loss: 0.3452 - accuracy: 0.8469 - categorical_accuracy: 1.0000 - precision: 0.7481 - recall: 0.6911 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4039 - val_accuracy: 0.8190 - val_categorical_accuracy: 1.0000 - val_precision: 0.7519 - val_recall: 0.6966 - val_top_k_categorical_accuracy: 1.0000
Epoch 22/35
716/716 [==============================] - 21s 29ms/step - loss: 0.3277 - accuracy: 0.8569 - categorical_accuracy: 1.0000 - precision: 0.7532 - recall: 0.6985 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3875 - val_accuracy: 0.8267 - val_categorical_accuracy: 1.0000 - val_precision: 0.7567 - val_recall: 0.7039 - val_top_k_categorical_accuracy: 1.0000
Epoch 23/35
716/716 [==============================] - 21s 29ms/step - loss: 0.3130 - accuracy: 0.8606 - categorical_accuracy: 1.0000 - precision: 0.7579 - recall: 0.7057 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3882 - val_accuracy: 0.8295 - val_categorical_accuracy: 1.0000 - val_precision: 0.7614 - val_recall: 0.7108 - val_top_k_categorical_accuracy: 1.0000
Epoch 24/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2928 - accuracy: 0.8750 - categorical_accuracy: 1.0000 - precision: 0.7627 - recall: 0.7127 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3864 - val_accuracy: 0.8295 - val_categorical_accuracy: 1.0000 - val_precision: 0.7659 - val_recall: 0.7175 - val_top_k_categorical_accuracy: 1.0000
Epoch 25/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2841 - accuracy: 0.8755 - categorical_accuracy: 1.0000 - precision: 0.7671 - recall: 0.7192 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3843 - val_accuracy: 0.8314 - val_categorical_accuracy: 1.0000 - val_precision: 0.7702 - val_recall: 0.7238 - val_top_k_categorical_accuracy: 1.0000
Epoch 26/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2657 - accuracy: 0.8847 - categorical_accuracy: 1.0000 - precision: 0.7714 - recall: 0.7255 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3773 - val_accuracy: 0.8295 - val_categorical_accuracy: 1.0000 - val_precision: 0.7744 - val_recall: 0.7299 - val_top_k_categorical_accuracy: 1.0000
Epoch 27/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2586 - accuracy: 0.8870 - categorical_accuracy: 1.0000 - precision: 0.7755 - recall: 0.7316 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3891 - val_accuracy: 0.8319 - val_categorical_accuracy: 1.0000 - val_precision: 0.7784 - val_recall: 0.7357 - val_top_k_categorical_accuracy: 1.0000
Epoch 28/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2555 - accuracy: 0.8893 - categorical_accuracy: 1.0000 - precision: 0.7794 - recall: 0.7372 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4025 - val_accuracy: 0.8329 - val_categorical_accuracy: 1.0000 - val_precision: 0.7822 - val_recall: 0.7412 - val_top_k_categorical_accuracy: 1.0000
Epoch 29/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2497 - accuracy: 0.8925 - categorical_accuracy: 1.0000 - precision: 0.7832 - recall: 0.7425 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3976 - val_accuracy: 0.8352 - val_categorical_accuracy: 1.0000 - val_precision: 0.7860 - val_recall: 0.7464 - val_top_k_categorical_accuracy: 1.0000
Epoch 30/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2293 - accuracy: 0.9014 - categorical_accuracy: 1.0000 - precision: 0.7870 - recall: 0.7478 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3920 - val_accuracy: 0.8371 - val_categorical_accuracy: 1.0000 - val_precision: 0.7897 - val_recall: 0.7516 - val_top_k_categorical_accuracy: 1.0000
Epoch 31/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2190 - accuracy: 0.9049 - categorical_accuracy: 1.0000 - precision: 0.7906 - recall: 0.7529 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4088 - val_accuracy: 0.8305 - val_categorical_accuracy: 1.0000 - val_precision: 0.7932 - val_recall: 0.7564 - val_top_k_categorical_accuracy: 1.0000
Epoch 32/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2143 - accuracy: 0.9090 - categorical_accuracy: 1.0000 - precision: 0.7942 - recall: 0.7577 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4257 - val_accuracy: 0.8324 - val_categorical_accuracy: 1.0000 - val_precision: 0.7966 - val_recall: 0.7612 - val_top_k_categorical_accuracy: 1.0000
Epoch 33/35
716/716 [==============================] - 21s 29ms/step - loss: 0.2104 - accuracy: 0.9119 - categorical_accuracy: 1.0000 - precision: 0.7975 - recall: 0.7624 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4094 - val_accuracy: 0.8352 - val_categorical_accuracy: 1.0000 - val_precision: 0.7999 - val_recall: 0.7659 - val_top_k_categorical_accuracy: 1.0000
Epoch 34/35
716/716 [==============================] - 21s 29ms/step - loss: 0.1894 - accuracy: 0.9188 - categorical_accuracy: 1.0000 - precision: 0.8008 - recall: 0.7671 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4136 - val_accuracy: 0.8467 - val_categorical_accuracy: 1.0000 - val_precision: 0.8032 - val_recall: 0.7703 - val_top_k_categorical_accuracy: 1.0000
Epoch 35/35
716/716 [==============================] - 21s 29ms/step - loss: 0.1844 - accuracy: 0.9244 - categorical_accuracy: 1.0000 - precision: 0.8041 - recall: 0.7715 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4204 - val_accuracy: 0.8438 - val_categorical_accuracy: 1.0000 - val_precision: 0.8065 - val_recall: 0.7747 - val_top_k_categorical_accuracy: 1.0000
datetime.timedelta(seconds=754, microseconds=606866)
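`EarlyStopping` and `ModelCheckpoint` are imported above but never used; the log shows `val_loss` bottoming out around epoch 26 and drifting upward afterward, so stopping on `val_loss` would save both training time and generalization. A sketch (the `patience` value and checkpoint filename are assumptions, not tuned):

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # stop once val_loss has not improved for 5 epochs, keeping the best weights
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    # also persist the best model seen so far to disk
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]
# history = model.fit(X_train, y_train, epochs=35,
#                     validation_data=(X_val, y_val), callbacks=callbacks)
```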
pd.DataFrame(history.history)
| loss | accuracy | categorical_accuracy | precision | recall | top_k_categorical_accuracy | val_loss | val_accuracy | val_categorical_accuracy | val_precision | val_recall | val_top_k_categorical_accuracy | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.724434 | 0.521397 | 1.0 | 0.509375 | 0.477030 | 1.0 | 0.689512 | 0.592857 | 1.0 | 0.529041 | 0.446105 | 1.0 |
| 1 | 0.678892 | 0.553712 | 1.0 | 0.538909 | 0.470867 | 1.0 | 0.682591 | 0.556190 | 1.0 | 0.546762 | 0.470956 | 1.0 |
| 2 | 0.661996 | 0.581179 | 1.0 | 0.555236 | 0.473220 | 1.0 | 0.668649 | 0.600000 | 1.0 | 0.563759 | 0.477887 | 1.0 |
| 3 | 0.636668 | 0.622183 | 1.0 | 0.575040 | 0.481764 | 1.0 | 0.626162 | 0.650476 | 1.0 | 0.585261 | 0.489466 | 1.0 |
| 4 | 0.608815 | 0.659039 | 1.0 | 0.596216 | 0.497735 | 1.0 | 0.612804 | 0.665238 | 1.0 | 0.606862 | 0.505179 | 1.0 |
| 5 | 0.587347 | 0.675458 | 1.0 | 0.616223 | 0.510057 | 1.0 | 0.593132 | 0.684286 | 1.0 | 0.625440 | 0.515896 | 1.0 |
| 6 | 0.567707 | 0.697031 | 1.0 | 0.634035 | 0.522301 | 1.0 | 0.562943 | 0.699048 | 1.0 | 0.641796 | 0.528873 | 1.0 |
| 7 | 0.546803 | 0.713013 | 1.0 | 0.649189 | 0.534877 | 1.0 | 0.530004 | 0.753810 | 1.0 | 0.655894 | 0.543111 | 1.0 |
| 8 | 0.523817 | 0.732838 | 1.0 | 0.662326 | 0.552509 | 1.0 | 0.517913 | 0.753810 | 1.0 | 0.667775 | 0.560772 | 1.0 |
| 9 | 0.508369 | 0.744978 | 1.0 | 0.672876 | 0.569161 | 1.0 | 0.492803 | 0.758095 | 1.0 | 0.677866 | 0.577066 | 1.0 |
| 10 | 0.486786 | 0.764192 | 1.0 | 0.682927 | 0.584818 | 1.0 | 0.482347 | 0.763810 | 1.0 | 0.687615 | 0.592549 | 1.0 |
| 11 | 0.475505 | 0.772926 | 1.0 | 0.692145 | 0.600161 | 1.0 | 0.478030 | 0.775238 | 1.0 | 0.696075 | 0.606965 | 1.0 |
| 12 | 0.456647 | 0.781266 | 1.0 | 0.700002 | 0.613702 | 1.0 | 0.456184 | 0.786190 | 1.0 | 0.703644 | 0.620091 | 1.0 |
| 13 | 0.442522 | 0.791747 | 1.0 | 0.707438 | 0.626125 | 1.0 | 0.429321 | 0.809524 | 1.0 | 0.710869 | 0.632260 | 1.0 |
| 14 | 0.424708 | 0.804760 | 1.0 | 0.714494 | 0.638019 | 1.0 | 0.420226 | 0.810476 | 1.0 | 0.717897 | 0.643803 | 1.0 |
| 15 | 0.413456 | 0.807118 | 1.0 | 0.721020 | 0.649078 | 1.0 | 0.413680 | 0.819524 | 1.0 | 0.724002 | 0.654339 | 1.0 |
| 16 | 0.400640 | 0.818384 | 1.0 | 0.727091 | 0.659364 | 1.0 | 0.412704 | 0.819524 | 1.0 | 0.730032 | 0.664218 | 1.0 |
| 17 | 0.377844 | 0.829127 | 1.0 | 0.732978 | 0.668995 | 1.0 | 0.481466 | 0.779048 | 1.0 | 0.735981 | 0.673145 | 1.0 |
| 18 | 0.369642 | 0.835022 | 1.0 | 0.738963 | 0.677049 | 1.0 | 0.395840 | 0.823810 | 1.0 | 0.741637 | 0.681271 | 1.0 |
| 19 | 0.359303 | 0.837948 | 1.0 | 0.744270 | 0.685431 | 1.0 | 0.407413 | 0.817619 | 1.0 | 0.746732 | 0.689166 | 1.0 |
| 20 | 0.343544 | 0.848908 | 1.0 | 0.749299 | 0.692885 | 1.0 | 0.403859 | 0.819048 | 1.0 | 0.751888 | 0.696634 | 1.0 |
| 21 | 0.332050 | 0.854804 | 1.0 | 0.754342 | 0.700240 | 1.0 | 0.387518 | 0.826667 | 1.0 | 0.756699 | 0.703850 | 1.0 |
| 22 | 0.315406 | 0.862271 | 1.0 | 0.759058 | 0.707359 | 1.0 | 0.388164 | 0.829524 | 1.0 | 0.761405 | 0.710784 | 1.0 |
| 23 | 0.302029 | 0.868035 | 1.0 | 0.763752 | 0.714247 | 1.0 | 0.386419 | 0.829524 | 1.0 | 0.765871 | 0.717456 | 1.0 |
| 24 | 0.288327 | 0.874367 | 1.0 | 0.768131 | 0.720657 | 1.0 | 0.384312 | 0.831429 | 1.0 | 0.770244 | 0.723779 | 1.0 |
| 25 | 0.275659 | 0.880131 | 1.0 | 0.772444 | 0.726899 | 1.0 | 0.377308 | 0.829524 | 1.0 | 0.774432 | 0.729869 | 1.0 |
| 26 | 0.267564 | 0.883493 | 1.0 | 0.776484 | 0.732943 | 1.0 | 0.389065 | 0.831905 | 1.0 | 0.778354 | 0.735715 | 1.0 |
| 27 | 0.259372 | 0.887380 | 1.0 | 0.780283 | 0.738502 | 1.0 | 0.402537 | 0.832857 | 1.0 | 0.782205 | 0.741151 | 1.0 |
| 28 | 0.246701 | 0.893668 | 1.0 | 0.784076 | 0.743785 | 1.0 | 0.397590 | 0.835238 | 1.0 | 0.785957 | 0.746427 | 1.0 |
| 29 | 0.231150 | 0.899869 | 1.0 | 0.787850 | 0.749028 | 1.0 | 0.392004 | 0.837143 | 1.0 | 0.789656 | 0.751563 | 1.0 |
| 30 | 0.221664 | 0.903450 | 1.0 | 0.791483 | 0.754037 | 1.0 | 0.408818 | 0.830476 | 1.0 | 0.793241 | 0.756389 | 1.0 |
| 31 | 0.213308 | 0.907729 | 1.0 | 0.795009 | 0.758781 | 1.0 | 0.425673 | 0.832381 | 1.0 | 0.796639 | 0.761161 | 1.0 |
| 32 | 0.207637 | 0.912358 | 1.0 | 0.798287 | 0.763571 | 1.0 | 0.409442 | 0.835238 | 1.0 | 0.799903 | 0.765861 | 1.0 |
| 33 | 0.193121 | 0.917555 | 1.0 | 0.801594 | 0.768150 | 1.0 | 0.413563 | 0.846667 | 1.0 | 0.803210 | 0.770316 | 1.0 |
| 34 | 0.186289 | 0.924367 | 1.0 | 0.804869 | 0.772511 | 1.0 | 0.420357 | 0.843810 | 1.0 | 0.806520 | 0.774697 | 1.0 |
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(20,20),linewidth=5.0)
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
del X_train
del X_val
jpg_list_test=os.listdir('/content/test')
x_test = [] # images as arrays
end1=0
for image in jpg_list_test:
    x_test.append(cv2.resize(cv2.imread('/content/test/{}'.format(image)), (200,200), interpolation=cv2.INTER_CUBIC))
    clear_output(wait=True)
    end1=end1+1
    print('Images Processed:', end1)
    print('Percent Complete:', round((100*(end1/len(jpg_list_test))),2))
Images Processed: 12500
Percent Complete: 100
x_test=asarray(x_test, dtype='float32')
x_test=x_test/255
y_pred=model.predict(x_test, verbose=0)
y_pred
array([[3.0871839e-04],
[5.5686864e-06],
[8.7085915e-01],
...,
[6.3323751e-05],
[8.4628209e-07],
[3.8253289e-02]], dtype=float32)
y_pred.min()
6.6707626e-14
y_pred.max()
1.0
import matplotlib.pyplot as plt
image = Image.open('/content/test/{}'.format(jpg_list_test[0]))
plt.imshow(image)
<matplotlib.image.AxesImage at 0x7f5152ad9110>
images=y_pred[0:36].astype(float)
images=np.around(images, decimals=4)
labeldgct=jpg_list_test[0:36]
im = Image.open("/content/test/9981.jpg")
plt.matshow(im)
<matplotlib.image.AxesImage at 0x7f5152154cd0>
print(images[1])
array([0.])
fig, axs = plt.subplots(6, 6, figsize = (40, 40))
plt.gray()
for i, ax in enumerate(axs.flat):
    i=500+i
    ax.set_title('0-1, Cat-Dog: {}'.format(np.around(y_pred[i].astype(float), decimals=4)))
    ax.matshow(Image.open('/content/test/{}'.format(jpg_list_test[i])))
    ax.axis('off')
submission_df=pd.read_csv('/content/sample_submission.csv')
submission_df['label']=y_pred
submission_df.to_csv('/content/submission.csv', index=False)
submission_df
| id | label | |
|---|---|---|
| 0 | 1 | 3.087184e-04 |
| 1 | 2 | 5.568686e-06 |
| 2 | 3 | 8.708591e-01 |
| 3 | 4 | 3.380682e-01 |
| 4 | 5 | 9.707786e-01 |
| ... | ... | ... |
| 12495 | 12496 | 2.585572e-02 |
| 12496 | 12497 | 4.696850e-02 |
| 12497 | 12498 | 6.332375e-05 |
| 12498 | 12499 | 8.462821e-07 |
| 12499 | 12500 | 3.825329e-02 |
12500 rows × 2 columns
# from keras.utils.vis_utils import plot_model
# plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
!kaggle competitions submit -c dogs-vs-cats-redux-kernels-edition -f /content/submission.csv -m 'CNN3'
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 198k/198k [00:01<00:00, 106kB/s]
Successfully submitted to Dogs vs. Cats Redux: Kernels Edition
3
!kaggle competitions submissions -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName date description status publicScore privateScore
--------------------- ------------------- ----------- -------- ----------- ------------
submission.csv 2021-06-09 23:55:44 CNN3 complete 2.50312 2.50312
submission.csv 2021-06-09 23:33:45 CNN3 complete 5.45001 5.45001
submission.csv 2021-06-09 22:25:38 CNN3 complete 1.62022 1.62022
submission.csv 2021-06-09 22:06:16 CNN3 complete 7.65715 7.65715
submission.csv 2021-06-09 20:42:45 CNN3 complete 3.71867 3.71867
submission.csv 2021-06-08 02:44:00 CNN2 complete 6.42962 6.42962
sample_submission.csv 2021-06-08 02:11:27 CNN1 complete 0.69314 0.69314
### MY KAGGLE USERNAME IS michaelrocchio2
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='/content/model_plot.png', show_shapes=True, show_layer_names=True)
from google.colab import files
files.upload()
Saving kaggle.json to kaggle.json
{'kaggle.json': b'{"username":"michaelrocchio","key":"<REDACTED>"}'}
!mv /content/kaggle.json /root/.kaggle/kaggle.json
ls /root/.kaggle/
kaggle.json
import kaggle
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
!kaggle competitions download -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
train.zip: Skipping, found more recently modified local copy (use --force to force download)
test.zip: Skipping, found more recently modified local copy (use --force to force download)
sample_submission.csv: Skipping, found more recently modified local copy (use --force to force download)
import zipfile
from zipfile import *
zip_test = ZipFile('/content/test.zip')
zip_test.extractall()
zip_train = ZipFile('/content/train.zip')
zip_train.extractall()
import os
import pandas as pd
import numpy as np
jpg_list=os.listdir('/content/train')
labels_list = pd.DataFrame({
    'label': jpg_list,
    'filename': jpg_list
})
labels_list['label']=labels_list['label'].str[0:3]
labels_list['y']=np.where(labels_list['label']=='dog',1,0)
labels_list
| | label | filename | y |
|---|---|---|---|
| 0 | cat | cat.9673.jpg | 0 |
| 1 | cat | cat.3257.jpg | 0 |
| 2 | dog | dog.11897.jpg | 1 |
| 3 | dog | dog.8137.jpg | 1 |
| 4 | dog | dog.2316.jpg | 1 |
| ... | ... | ... | ... |
| 24995 | dog | dog.12438.jpg | 1 |
| 24996 | dog | dog.10947.jpg | 1 |
| 24997 | dog | dog.6583.jpg | 1 |
| 24998 | cat | cat.4690.jpg | 0 |
| 24999 | cat | cat.1629.jpg | 0 |
25000 rows × 3 columns
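With labels derived from filenames, a quick sanity check on class balance is cheap insurance before training; a minimal sketch of the same filename-to-label logic on toy data (the real list holds 12,500 cats and 12,500 dogs):

```python
import pandas as pd
import numpy as np

# toy stand-in for jpg_list
jpg_list = ['cat.1.jpg', 'dog.2.jpg', 'cat.3.jpg', 'dog.4.jpg']
labels_list = pd.DataFrame({'label': jpg_list, 'filename': jpg_list})
labels_list['label'] = labels_list['label'].str[0:3]
labels_list['y'] = np.where(labels_list['label'] == 'dog', 1, 0)

# a balanced set means plain accuracy is a fair headline metric
print(sorted(labels_list['y'].value_counts().to_dict().items()))  # [(0, 2), (1, 2)]
```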
import sys
from PIL import Image
import time
import glob
from IPython.display import clear_output
import cv2
from numpy import asarray
jpg_list_train=jpg_list
x = [] # images as arrays
y = [] # labels
end1=0
for image in jpg_list_train:
    x.append(cv2.resize(cv2.imread('/content/train/{}'.format(image)), (200,200), interpolation=cv2.INTER_CUBIC))
    clear_output(wait=True)
    end1 = end1 + 1
    print('Images Processed:', end1)
    print('Percent Complete:', round((100*(end1/len(jpg_list_train))), 2))
    if 'dog' in image:
        y.append(1)
    elif 'cat' in image:
        y.append(0)
Images Processed: 25000
Percent Complete: 100.0
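Loading every resized image into one float32 array is the simplest approach but memory-hungry (Keras's `ImageDataGenerator.flow_from_dataframe` would be a batch-wise alternative if RAM is tight); a back-of-envelope check of the footprint:

```python
# memory needed to hold all resized images in a single float32 array
n_images, h, w, c = 25000, 200, 200, 3
bytes_f32 = n_images * h * w * c * 4   # 4 bytes per float32
print(f'{bytes_f32 / 1e9:.1f} GB')     # 12.0 GB
```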
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img
from keras import layers, models, optimizers
from keras import backend as K
from numpy import asarray
x=asarray(x, dtype='float32')
y=asarray(y, dtype='float32')
# x=np.transpose(x, (2, 1, 3, 0))
x.shape
(25000, 200, 200, 3)
x=x/255
y.shape
(25000,)
X_train = x[2100:,:,:,:]
X_val = x[:2100,:,:,:]
del x
y_train = y[2100:]
y_val = y[:2100]
X_train.shape
y_train.shape
del y
X_train.shape
(22900, 200, 200, 3)
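The split above slices the first 2,100 images off the front of the array, which inherits whatever order `os.listdir` happened to return; a shuffled, stratified split removes any dependence on that ordering (sketch on toy arrays, using the `train_test_split` already imported above):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-ins for x (images) and y (labels)
x_toy = np.zeros((100, 8, 8, 3), dtype='float32')
y_toy = np.array([0, 1] * 50, dtype='float32')

X_tr, X_va, y_tr, y_va = train_test_split(
    x_toy, y_toy, test_size=0.1, random_state=42, stratify=y_toy)
print(X_tr.shape[0], X_va.shape[0])   # 90 10
```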
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
import tensorflow
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, Dense, Activation, Conv2D
from keras.optimizers import RMSprop, SGD
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
keras.backend.clear_session()
model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(200, 200, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 200, 200, 64) 1792
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 100, 100, 64) 0
_________________________________________________________________
dropout (Dropout) (None, 100, 100, 64) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 100, 100, 128) 73856
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 50, 50, 128) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 50, 50, 128) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 50, 50, 256) 295168
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 25, 25, 256) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 25, 25, 256) 0
_________________________________________________________________
flatten (Flatten) (None, 160000) 0
_________________________________________________________________
dense (Dense) (None, 256) 40960256
_________________________________________________________________
dropout_3 (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 257
=================================================================
Total params: 41,331,329
Trainable params: 41,331,329
Non-trainable params: 0
_________________________________________________________________
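The parameter counts in the summary can be verified by hand: a Conv2D layer has (kh·kw·in_channels + 1)·filters weights and a Dense layer has (inputs + 1)·units:

```python
# reproduce the summary's parameter counts from the layer shapes
conv1 = (3 * 3 * 3 + 1) * 64          # 1,792
conv2 = (3 * 3 * 64 + 1) * 128        # 73,856
conv3 = (3 * 3 * 128 + 1) * 256       # 295,168
dense1 = (25 * 25 * 256 + 1) * 256    # 40,960,256 (Flatten: 25*25*256 = 160,000)
dense2 = (256 + 1) * 1                # 257
print(conv1 + conv2 + conv3 + dense1 + dense2)  # 41331329
```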
import tensorflow as tf
class MulticlassTruePositives(tf.keras.metrics.Metric):
    # note: written for multiclass softmax outputs (argmax over classes);
    # with the single-unit sigmoid model above it is not meaningful, and it
    # is not passed to model.compile() below
    def __init__(self, name='multiclass_true_positives', **kwargs):
        super(MulticlassTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
        values = tf.cast(y_true, 'int32') == tf.cast(y_pred, 'int32')
        values = tf.cast(values, 'float32')
        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, 'float32')
            values = tf.multiply(values, sample_weight)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):
        # the state of the metric is reset at the start of each epoch
        self.true_positives.assign(0.)
# K (keras backend) is already imported above; no need for `from tensorflow import *`
def recall(y_true, y_pred):
    # recall = true positives / actual positives, with predictions rounded at 0.5
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    all_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (all_positives + K.epsilon())

def precision(y_true, y_pred):
    # precision = true positives / predicted positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())

def f1_score(y_true, y_pred):
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))
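For intuition, the same precision/recall computation in plain numpy on toy data; Keras's built-in Precision and Recall metrics likewise threshold the sigmoid output at 0.5:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])
y_prob = np.array([0.9, 0.6, 0.3, 0.8, 0.2, 0.7])   # toy sigmoid outputs
y_hat = (y_prob >= 0.5).astype(int)                  # round at 0.5

tp = int(np.sum((y_hat == 1) & (y_true == 1)))
precision_v = tp / int(np.sum(y_hat == 1))
recall_v = tp / int(np.sum(y_true == 1))
print(precision_v, recall_v)  # 0.75 0.75
```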
# note: categorical_accuracy and top-k accuracy are degenerate for a single
# sigmoid output (they report a constant 1.0); accuracy, precision, and
# recall are the meaningful metrics here
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9), loss='binary_crossentropy', metrics=['accuracy', tensorflow.keras.metrics.categorical_accuracy, tensorflow.keras.metrics.Precision(), tensorflow.keras.metrics.Recall(), tensorflow.keras.metrics.TopKCategoricalAccuracy(k=2)])
import datetime
start=datetime.datetime.now()
history = model.fit(X_train, y_train, epochs=25, validation_data=(X_val, y_val))
print(datetime.datetime.now() - start)
Epoch 1/25
716/716 [==============================] - 68s 56ms/step - loss: 0.9807 - accuracy: 0.5206 - categorical_accuracy: 1.0000 - precision: 0.5030 - recall: 0.4663 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6866 - val_accuracy: 0.5343 - val_categorical_accuracy: 1.0000 - val_precision: 0.5404 - val_recall: 0.5182 - val_top_k_categorical_accuracy: 1.0000
Epoch 2/25
716/716 [==============================] - 39s 54ms/step - loss: 0.6604 - accuracy: 0.5948 - categorical_accuracy: 1.0000 - precision: 0.5517 - recall: 0.5132 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6508 - val_accuracy: 0.6010 - val_categorical_accuracy: 1.0000 - val_precision: 0.5778 - val_recall: 0.5274 - val_top_k_categorical_accuracy: 1.0000
Epoch 3/25
716/716 [==============================] - 39s 54ms/step - loss: 0.6151 - accuracy: 0.6568 - categorical_accuracy: 1.0000 - precision: 0.5892 - recall: 0.5281 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.6066 - val_accuracy: 0.6562 - val_categorical_accuracy: 1.0000 - val_precision: 0.6139 - val_recall: 0.5418 - val_top_k_categorical_accuracy: 1.0000
Epoch 4/25
716/716 [==============================] - 39s 54ms/step - loss: 0.5834 - accuracy: 0.6898 - categorical_accuracy: 1.0000 - precision: 0.6226 - recall: 0.5463 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5765 - val_accuracy: 0.7019 - val_categorical_accuracy: 1.0000 - val_precision: 0.6413 - val_recall: 0.5600 - val_top_k_categorical_accuracy: 1.0000
Epoch 5/25
716/716 [==============================] - 39s 54ms/step - loss: 0.5567 - accuracy: 0.7147 - categorical_accuracy: 1.0000 - precision: 0.6480 - recall: 0.5639 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5282 - val_accuracy: 0.7490 - val_categorical_accuracy: 1.0000 - val_precision: 0.6625 - val_recall: 0.5769 - val_top_k_categorical_accuracy: 1.0000
Epoch 6/25
716/716 [==============================] - 39s 54ms/step - loss: 0.5402 - accuracy: 0.7238 - categorical_accuracy: 1.0000 - precision: 0.6672 - recall: 0.5821 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5336 - val_accuracy: 0.7171 - val_categorical_accuracy: 1.0000 - val_precision: 0.6799 - val_recall: 0.5943 - val_top_k_categorical_accuracy: 1.0000
Epoch 7/25
716/716 [==============================] - 39s 54ms/step - loss: 0.5096 - accuracy: 0.7492 - categorical_accuracy: 1.0000 - precision: 0.6841 - recall: 0.5978 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.5266 - val_accuracy: 0.7195 - val_categorical_accuracy: 1.0000 - val_precision: 0.6942 - val_recall: 0.6088 - val_top_k_categorical_accuracy: 1.0000
Epoch 8/25
716/716 [==============================] - 39s 54ms/step - loss: 0.4900 - accuracy: 0.7630 - categorical_accuracy: 1.0000 - precision: 0.6977 - recall: 0.6124 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4749 - val_accuracy: 0.7781 - val_categorical_accuracy: 1.0000 - val_precision: 0.7061 - val_recall: 0.6242 - val_top_k_categorical_accuracy: 1.0000
Epoch 9/25
716/716 [==============================] - 39s 55ms/step - loss: 0.4666 - accuracy: 0.7805 - categorical_accuracy: 1.0000 - precision: 0.7094 - recall: 0.6285 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4661 - val_accuracy: 0.7833 - val_categorical_accuracy: 1.0000 - val_precision: 0.7172 - val_recall: 0.6392 - val_top_k_categorical_accuracy: 1.0000
Epoch 10/25
716/716 [==============================] - 39s 55ms/step - loss: 0.4526 - accuracy: 0.7873 - categorical_accuracy: 1.0000 - precision: 0.7198 - recall: 0.6429 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4456 - val_accuracy: 0.7914 - val_categorical_accuracy: 1.0000 - val_precision: 0.7266 - val_recall: 0.6526 - val_top_k_categorical_accuracy: 1.0000
Epoch 11/25
716/716 [==============================] - 39s 55ms/step - loss: 0.4185 - accuracy: 0.8129 - categorical_accuracy: 1.0000 - precision: 0.7293 - recall: 0.6563 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4509 - val_accuracy: 0.7905 - val_categorical_accuracy: 1.0000 - val_precision: 0.7356 - val_recall: 0.6651 - val_top_k_categorical_accuracy: 1.0000
Epoch 12/25
716/716 [==============================] - 39s 55ms/step - loss: 0.3940 - accuracy: 0.8202 - categorical_accuracy: 1.0000 - precision: 0.7381 - recall: 0.6682 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4046 - val_accuracy: 0.8210 - val_categorical_accuracy: 1.0000 - val_precision: 0.7438 - val_recall: 0.6770 - val_top_k_categorical_accuracy: 1.0000
Epoch 13/25
716/716 [==============================] - 39s 54ms/step - loss: 0.3753 - accuracy: 0.8290 - categorical_accuracy: 1.0000 - precision: 0.7459 - recall: 0.6801 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4214 - val_accuracy: 0.8019 - val_categorical_accuracy: 1.0000 - val_precision: 0.7514 - val_recall: 0.6879 - val_top_k_categorical_accuracy: 1.0000
Epoch 14/25
716/716 [==============================] - 39s 54ms/step - loss: 0.3548 - accuracy: 0.8411 - categorical_accuracy: 1.0000 - precision: 0.7534 - recall: 0.6905 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.4027 - val_accuracy: 0.8167 - val_categorical_accuracy: 1.0000 - val_precision: 0.7584 - val_recall: 0.6979 - val_top_k_categorical_accuracy: 1.0000
Epoch 15/25
716/716 [==============================] - 39s 54ms/step - loss: 0.3444 - accuracy: 0.8509 - categorical_accuracy: 1.0000 - precision: 0.7603 - recall: 0.7005 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3779 - val_accuracy: 0.8267 - val_categorical_accuracy: 1.0000 - val_precision: 0.7650 - val_recall: 0.7078 - val_top_k_categorical_accuracy: 1.0000
Epoch 16/25
716/716 [==============================] - 39s 54ms/step - loss: 0.3217 - accuracy: 0.8612 - categorical_accuracy: 1.0000 - precision: 0.7667 - recall: 0.7103 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3993 - val_accuracy: 0.8243 - val_categorical_accuracy: 1.0000 - val_precision: 0.7713 - val_recall: 0.7169 - val_top_k_categorical_accuracy: 1.0000
Epoch 17/25
716/716 [==============================] - 39s 54ms/step - loss: 0.3025 - accuracy: 0.8694 - categorical_accuracy: 1.0000 - precision: 0.7730 - recall: 0.7191 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3787 - val_accuracy: 0.8319 - val_categorical_accuracy: 1.0000 - val_precision: 0.7774 - val_recall: 0.7254 - val_top_k_categorical_accuracy: 1.0000
Epoch 18/25
716/716 [==============================] - 39s 54ms/step - loss: 0.2914 - accuracy: 0.8746 - categorical_accuracy: 1.0000 - precision: 0.7790 - recall: 0.7276 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3820 - val_accuracy: 0.8390 - val_categorical_accuracy: 1.0000 - val_precision: 0.7832 - val_recall: 0.7334 - val_top_k_categorical_accuracy: 1.0000
Epoch 19/25
716/716 [==============================] - 39s 54ms/step - loss: 0.2644 - accuracy: 0.8875 - categorical_accuracy: 1.0000 - precision: 0.7848 - recall: 0.7354 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3826 - val_accuracy: 0.8295 - val_categorical_accuracy: 1.0000 - val_precision: 0.7889 - val_recall: 0.7410 - val_top_k_categorical_accuracy: 1.0000
Epoch 20/25
716/716 [==============================] - 39s 54ms/step - loss: 0.2487 - accuracy: 0.8955 - categorical_accuracy: 1.0000 - precision: 0.7904 - recall: 0.7431 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3686 - val_accuracy: 0.8414 - val_categorical_accuracy: 1.0000 - val_precision: 0.7943 - val_recall: 0.7484 - val_top_k_categorical_accuracy: 1.0000
Epoch 21/25
716/716 [==============================] - 39s 54ms/step - loss: 0.2239 - accuracy: 0.9098 - categorical_accuracy: 1.0000 - precision: 0.7957 - recall: 0.7504 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3799 - val_accuracy: 0.8367 - val_categorical_accuracy: 1.0000 - val_precision: 0.7995 - val_recall: 0.7556 - val_top_k_categorical_accuracy: 1.0000
Epoch 22/25
716/716 [==============================] - 39s 54ms/step - loss: 0.2127 - accuracy: 0.9133 - categorical_accuracy: 1.0000 - precision: 0.8008 - recall: 0.7575 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3966 - val_accuracy: 0.8329 - val_categorical_accuracy: 1.0000 - val_precision: 0.8044 - val_recall: 0.7626 - val_top_k_categorical_accuracy: 1.0000
Epoch 23/25
716/716 [==============================] - 39s 54ms/step - loss: 0.1949 - accuracy: 0.9207 - categorical_accuracy: 1.0000 - precision: 0.8057 - recall: 0.7644 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3779 - val_accuracy: 0.8381 - val_categorical_accuracy: 1.0000 - val_precision: 0.8092 - val_recall: 0.7693 - val_top_k_categorical_accuracy: 1.0000
Epoch 24/25
716/716 [==============================] - 39s 54ms/step - loss: 0.1845 - accuracy: 0.9243 - categorical_accuracy: 1.0000 - precision: 0.8104 - recall: 0.7710 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3959 - val_accuracy: 0.8362 - val_categorical_accuracy: 1.0000 - val_precision: 0.8138 - val_recall: 0.7756 - val_top_k_categorical_accuracy: 1.0000
Epoch 25/25
716/716 [==============================] - 39s 54ms/step - loss: 0.1675 - accuracy: 0.9329 - categorical_accuracy: 1.0000 - precision: 0.8150 - recall: 0.7773 - top_k_categorical_accuracy: 1.0000 - val_loss: 0.3919 - val_accuracy: 0.8400 - val_categorical_accuracy: 1.0000 - val_precision: 0.8182 - val_recall: 0.7816 - val_top_k_categorical_accuracy: 1.0000
datetime.timedelta(seconds=1004, microseconds=470660)
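The log shows val_loss flattening out around epochs 15–20 while training loss keeps falling, a hint of overfitting. Keras's `EarlyStopping` callback (already imported above, e.g. `EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)`) automates the stop; its core rule is just:

```python
# minimal early-stopping rule: stop once val_loss has not improved
# for `patience` consecutive epochs
def should_stop(val_losses, patience=3):
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return len(val_losses) - 1 - best_epoch >= patience

print(should_stop([0.69, 0.60, 0.55, 0.56, 0.57, 0.58]))  # True
```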
pd.DataFrame(history.history)
| | loss | accuracy | categorical_accuracy | precision | recall | top_k_categorical_accuracy | val_loss | val_accuracy | val_categorical_accuracy | val_precision | val_recall | val_top_k_categorical_accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.740588 | 0.539258 | 1.0 | 0.513155 | 0.480929 | 1.0 | 0.686644 | 0.534286 | 1.0 | 0.540411 | 0.518197 | 1.0 |
| 1 | 0.652977 | 0.604891 | 1.0 | 0.559777 | 0.523186 | 1.0 | 0.650827 | 0.600952 | 1.0 | 0.577809 | 0.527431 | 1.0 |
| 2 | 0.610692 | 0.661092 | 1.0 | 0.597239 | 0.533387 | 1.0 | 0.606594 | 0.656190 | 1.0 | 0.613886 | 0.541772 | 1.0 |
| 3 | 0.579999 | 0.691878 | 1.0 | 0.628733 | 0.551825 | 1.0 | 0.576467 | 0.701905 | 1.0 | 0.641254 | 0.559987 | 1.0 |
| 4 | 0.557530 | 0.712358 | 1.0 | 0.652811 | 0.568169 | 1.0 | 0.528220 | 0.749048 | 1.0 | 0.662456 | 0.576912 | 1.0 |
| 5 | 0.529246 | 0.735371 | 1.0 | 0.671092 | 0.586116 | 1.0 | 0.533564 | 0.717143 | 1.0 | 0.679892 | 0.594281 | 1.0 |
| 6 | 0.505080 | 0.750917 | 1.0 | 0.687300 | 0.601645 | 1.0 | 0.526588 | 0.719524 | 1.0 | 0.694180 | 0.608834 | 1.0 |
| 7 | 0.483435 | 0.766288 | 1.0 | 0.700386 | 0.616295 | 1.0 | 0.474867 | 0.778095 | 1.0 | 0.706053 | 0.624231 | 1.0 |
| 8 | 0.464244 | 0.784410 | 1.0 | 0.711825 | 0.632051 | 1.0 | 0.466090 | 0.783333 | 1.0 | 0.717178 | 0.639225 | 1.0 |
| 9 | 0.443609 | 0.793493 | 1.0 | 0.721852 | 0.646105 | 1.0 | 0.445630 | 0.791429 | 1.0 | 0.726627 | 0.652645 | 1.0 |
| 10 | 0.419372 | 0.808515 | 1.0 | 0.731340 | 0.659224 | 1.0 | 0.450866 | 0.790476 | 1.0 | 0.735575 | 0.665116 | 1.0 |
| 11 | 0.396802 | 0.820830 | 1.0 | 0.739974 | 0.671091 | 1.0 | 0.404592 | 0.820952 | 1.0 | 0.743798 | 0.676982 | 1.0 |
| 12 | 0.376334 | 0.830087 | 1.0 | 0.747601 | 0.682732 | 1.0 | 0.421363 | 0.801905 | 1.0 | 0.751370 | 0.687895 | 1.0 |
| 13 | 0.360483 | 0.839476 | 1.0 | 0.755037 | 0.692902 | 1.0 | 0.402713 | 0.816667 | 1.0 | 0.758407 | 0.697865 | 1.0 |
| 14 | 0.343624 | 0.851135 | 1.0 | 0.761888 | 0.702837 | 1.0 | 0.377874 | 0.826667 | 1.0 | 0.765033 | 0.707768 | 1.0 |
| 15 | 0.321372 | 0.859476 | 1.0 | 0.768192 | 0.712456 | 1.0 | 0.399280 | 0.824286 | 1.0 | 0.771349 | 0.716866 | 1.0 |
| 16 | 0.303072 | 0.868646 | 1.0 | 0.774467 | 0.721184 | 1.0 | 0.378674 | 0.831905 | 1.0 | 0.777375 | 0.725435 | 1.0 |
| 17 | 0.288336 | 0.875764 | 1.0 | 0.780305 | 0.729483 | 1.0 | 0.381988 | 0.839048 | 1.0 | 0.783170 | 0.733389 | 1.0 |
| 18 | 0.268062 | 0.887642 | 1.0 | 0.786130 | 0.737292 | 1.0 | 0.382648 | 0.829524 | 1.0 | 0.788916 | 0.741046 | 1.0 |
| 19 | 0.252133 | 0.893493 | 1.0 | 0.791670 | 0.744836 | 1.0 | 0.368586 | 0.841429 | 1.0 | 0.794257 | 0.748401 | 1.0 |
| 20 | 0.232650 | 0.903974 | 1.0 | 0.796999 | 0.752092 | 1.0 | 0.379897 | 0.836667 | 1.0 | 0.799474 | 0.755610 | 1.0 |
| 21 | 0.217904 | 0.910611 | 1.0 | 0.802052 | 0.759161 | 1.0 | 0.396558 | 0.832857 | 1.0 | 0.804389 | 0.762567 | 1.0 |
| 22 | 0.197558 | 0.919432 | 1.0 | 0.806874 | 0.766001 | 1.0 | 0.377866 | 0.838095 | 1.0 | 0.809223 | 0.769280 | 1.0 |
| 23 | 0.187055 | 0.923362 | 1.0 | 0.811542 | 0.772435 | 1.0 | 0.395948 | 0.836190 | 1.0 | 0.813788 | 0.775595 | 1.0 |
| 24 | 0.170926 | 0.930262 | 1.0 | 0.816097 | 0.778709 | 1.0 | 0.391895 | 0.840000 | 1.0 | 0.818228 | 0.781640 | 1.0 |
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(20,20),linewidth=5.0)
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
del X_train
del X_val
jpg_list_test=os.listdir('/content/test')
x_test = [] # images as arrays
end1=0
for image in jpg_list_test:
    x_test.append(cv2.resize(cv2.imread('/content/test/{}'.format(image)), (200,200), interpolation=cv2.INTER_CUBIC))
    clear_output(wait=True)
    end1 = end1 + 1
    print('Images Processed:', end1)
    print('Percent Complete:', round((100*(end1/len(jpg_list_test))), 2))
Images Processed: 12500
Percent Complete: 100
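One caution here: `os.listdir` returns filenames in arbitrary order, while `sample_submission.csv` expects ids 1 through 12500 in order. If the two orders differ, every prediction lands on the wrong id, which by itself could explain a large log-loss score. Sorting the test filenames by their numeric stem (sketch) pins the order down:

```python
# test files are named '1.jpg' ... '12500.jpg'; sort by the integer stem
files = ['10.jpg', '2.jpg', '1.jpg']   # arbitrary os.listdir order
files_sorted = sorted(files, key=lambda f: int(f.split('.')[0]))
print(files_sorted)  # ['1.jpg', '2.jpg', '10.jpg']
```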
x_test=asarray(x_test, dtype='float32')
x_test=x_test/255
y_pred=model.predict(x_test, verbose=0)
y_pred
array([[8.7940879e-02],
[1.6771553e-05],
[5.5338269e-01],
...,
[3.8250811e-05],
[2.6799698e-06],
[1.0336835e-01]], dtype=float32)
y_pred.min()
1.7156144e-09
y_pred.max()
1.0
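Predictions reaching exactly 1.0 are risky for this competition, which is scored by log loss: a confident mistake is penalized without bound. A common hedge is clipping, e.g. `np.clip(y_pred, 0.02, 0.98)` before writing the submission; the arithmetic behind why (sketch):

```python
import numpy as np

def sample_log_loss(y_true, p, eps=1e-15):
    # per-sample binary cross-entropy, guarding against log(0)
    p = min(max(p, eps), 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(round(sample_log_loss(1, 1e-9), 2))   # 20.72: a confidently wrong prediction
print(round(sample_log_loss(1, 0.02), 2))   # 3.91: the same mistake after clipping
```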
import matplotlib.pyplot as plt
image = Image.open('/content/test/{}'.format(jpg_list_test[0]))
plt.imshow(image)
<matplotlib.image.AxesImage at 0x7fca78111bd0>
images=y_pred[0:36].astype(float)
images=np.around(images, decimals=4)
labeldgct=jpg_list_test[0:36]
im = Image.open("/content/test/9981.jpg")
plt.matshow(im)
<matplotlib.image.AxesImage at 0x7fca78d0f810>
print(images[1])
array([0.])
fig, axs = plt.subplots(6, 6, figsize = (40, 40))
plt.gray()
for i, ax in enumerate(axs.flat):
    i = 500 + i
    ax.set_title('0-1, Cat-Dog: {}'.format(np.around(y_pred[i].astype(float), decimals=4)))
    ax.matshow(Image.open('/content/test/{}'.format(jpg_list_test[i])))
    ax.axis('off')
submission_df=pd.read_csv('/content/sample_submission.csv')
submission_df['label']=y_pred
submission_df.to_csv('/content/submission.csv', index=False)
submission_df
| | id | label |
|---|---|---|
| 0 | 1 | 0.087941 |
| 1 | 2 | 0.000017 |
| 2 | 3 | 0.553383 |
| 3 | 4 | 0.137030 |
| 4 | 5 | 0.999101 |
| ... | ... | ... |
| 12495 | 12496 | 0.002683 |
| 12496 | 12497 | 0.069729 |
| 12497 | 12498 | 0.000038 |
| 12498 | 12499 | 0.000003 |
| 12499 | 12500 | 0.103368 |
12500 rows × 2 columns
# from keras.utils.vis_utils import plot_model
# plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
!kaggle competitions submit -c dogs-vs-cats-redux-kernels-edition -f /content/submission.csv -m 'CNN3'
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 196k/196k [00:02<00:00, 94.9kB/s]
Successfully submitted to Dogs vs. Cats Redux: Kernels Edition
3
!kaggle competitions submissions -c dogs-vs-cats-redux-kernels-edition
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName date description status publicScore privateScore
--------------------- ------------------- ----------- -------- ----------- ------------
submission.csv 2021-06-10 01:39:26 CNN3 complete 2.22274 2.22274
submission.csv 2021-06-10 00:28:53 CNN3 complete 3.90852 3.90852
submission.csv 2021-06-09 23:55:44 CNN3 complete 2.50312 2.50312
submission.csv 2021-06-09 23:33:45 CNN3 complete 5.45001 5.45001
submission.csv 2021-06-09 22:25:38 CNN3 complete 1.62022 1.62022
submission.csv 2021-06-09 22:06:16 CNN3 complete 7.65715 7.65715
submission.csv 2021-06-09 20:42:45 CNN3 complete 3.71867 3.71867
submission.csv 2021-06-08 02:44:00 CNN2 complete 6.42962 6.42962
sample_submission.csv 2021-06-08 02:11:27 CNN1 complete 0.69314 0.69314
### MY KAGGLE USERNAME IS michaelrocchio2
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='/content/model_plot.png', show_shapes=True, show_layer_names=True)